Cyberside Chats
Anthropic’s Project Glasswing and its unreleased Mythos model signal a potential turning point in cybersecurity: AI that can find, and potentially exploit, software vulnerabilities at unprecedented scale. In this episode of Cyberside Chats, Sherri Davidoff and Tom Pohl break down what this means for organizations today. If AI can uncover decades-old bugs in seconds, what happens to patching cycles, vulnerability management, and the balance between attackers and defenders? They explore the uncomfortable reality: we may be entering a period where vulnerabilities are discovered faster than organizations can fix them, and where access to powerful AI tools could determine who wins and loses in cybersecurity. From continuous patching to network segmentation and vendor accountability, this episode focuses on what security leaders need to do right now to prepare for a rapidly shifting threat landscape.

Key Takeaways

1. Reduce your internet exposure - If a system doesn’t need to be publicly accessible, don’t put it on the internet. Move services behind firewalls, VPNs, or restricted access controls wherever possible. Attack surface matters more than ever.
2. Vet your vendors’ security practices - Don’t just trust that vendors are handling security well. Ask how they:
   - Secure their development lifecycle (SDLC)
   - Detect and respond to vulnerabilities
   - Patch and distribute fixes
   Vendor risk is now a direct extension of your own risk.
3. Budget for ongoing maintenance of custom code - Custom applications aren’t “done” at deployment. Plan for:
   - Regular security testing
   - Continuous patching
   - Developer time to fix vulnerabilities
   Software is a living system and requires ongoing care and feeding.
4. Segment your network to limit attacker movement - Assume attackers will get in. The goal is to stop them from moving laterally:
   - Separate critical systems
   - Limit privileged account access
   - Control how systems communicate
   Containment is just as important as prevention.
5. Update your incident response plan for zero-day reality - Your IR plan should assume:
   - Exploits may exist before patches are available
   - Detection may lag behind compromise
   Prepare for faster response, imperfect information, and active exploitation of unknown vulnerabilities.

Resources & References

1. Anthropic – Project Glasswing - https://www.anthropic.com/glasswing
2. Anthropic – Mythos Preview - https://red.anthropic.com/2026/mythos-preview/
3. Historical example discussed: Microsoft bug tracking system breach (2017)
4. Example referenced: ProxyShell (Microsoft Exchange vulnerabilities and rapid exploitation)
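The "reduce your internet exposure" advice above can be operationalized as a simple inventory check. Below is a minimal Python sketch, not a real tool: the field names (`internet_facing`, `public_access_required`) are hypothetical, standing in for whatever your asset inventory actually records.

```python
# Hypothetical sketch: flag services that are reachable from the internet
# without a documented business need. Field names are assumptions.

def find_excess_exposure(services):
    """Return names of services that are internet-facing without a documented need."""
    return [
        s["name"]
        for s in services
        if s.get("internet_facing") and not s.get("public_access_required")
    ]

inventory = [
    {"name": "marketing-site", "internet_facing": True, "public_access_required": True},
    {"name": "hr-portal", "internet_facing": True, "public_access_required": False},
    {"name": "build-server", "internet_facing": False, "public_access_required": False},
]
print(find_excess_exposure(inventory))  # → ['hr-portal']
```

Even a crude check like this, run regularly against a maintained inventory, surfaces the systems that should move behind a firewall or VPN.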
In this episode, Matt interviews Tom and Derek from our pen test team to break down why attackers often don’t need to hack their way in at all. While most organizations invest heavily in tools like EDR and SIEM, Tom and Derek share how they regularly get inside buildings using nothing more than confidence, a good story, and sometimes even a box of donuts. From posing as copier technicians to tailgating behind employees, their experiences show that people are often the easiest way into an organization. And once they’re in, things escalate fast. Physical access can quickly turn into network access, whether it’s plugging in a device, jumping on an unlocked workstation, or moving through the environment with far fewer restrictions than an external attacker would face.

The big takeaway is simple: real-world testing exposes what audits miss. Doors get propped open, employees try to be helpful, and small gaps add up in ways most organizations never see on paper. If you’re not testing your people and your physical controls, you’re only testing part of your security.

Key takeaways:

1. Attackers target people first, not systems - Social engineering consistently bypasses even mature technical controls.
2. Physical access equals full compromise - Once inside your facility, most security controls can be circumvented quickly.
3. Untested controls are assumed to fail - If you’re not running social engineering or physical assessments, you don’t know your real risk.
4. Culture is a security control - Employees must feel empowered to challenge, verify, and report suspicious behavior.
5. Real-world testing reveals what audits miss - Offensive social engineering exposes how attacks succeed, not just theoretical vulnerabilities.
A $25 billion medical device company brought to a standstill, without a zero-day exploit. In this episode of Cyberside Chats, Sherri Davidoff is joined by cyber insurance expert Bridget Quinn Choi to unpack the Stryker cyberattack and what it reveals about modern enterprise risk. From compromised admin credentials to the abuse of Microsoft Entra and Intune, this incident highlights how attackers are increasingly using trusted tools to cause widespread disruption. We explore what likely happened, why this wasn’t a “sophisticated” attack in the traditional sense, and how a single identity compromise can cascade into operational shutdown. Bridget brings a unique perspective from the cyber insurance world, explaining how insurers evaluate risk, why some large companies choose to go without coverage, and what organizations lose when they do. We also dig into phishing-resistant MFA, governance of powerful admin tools, and the evolving role of insurance as both a financial backstop and a driver of better security practices. If your organization relies on centralized identity and device management systems, this is a conversation you can’t afford to miss.

Key Takeaways for Security Leadership

1. Use Cyber Insurance as a Security Maturity Lever - Don’t treat cyber insurance as a checkbox; it can actively strengthen your security program. Use underwriting requirements to benchmark your controls, ask brokers and carriers where you differ from peers, and take advantage of included services like threat intelligence and incident response support. Approach renewal as a security review, not just a policy purchase.
2. Treat Self-Insurance as a Strategic Risk Decision, Not a Cost Savings Measure - If you’re considering self-insuring cyber risk, account for what you’re giving up: external validation of your controls, a built-in incident response ecosystem, and coordinated support during a crisis. This should be a board-level discussion focused on whether the organization can handle a major operational outage, not just absorb the financial loss.
3. Secure Your Device Management Systems, Because They Can Control Everything at Once - Systems used to manage laptops, servers, and mobile devices can push changes across your entire organization. If attackers gain access, they can disrupt operations at scale. Treat these as central control hubs, limit administrative access, and apply strong monitoring and authentication controls.
4. Require Dual Approval for High-Impact Administrative Actions - Add a second layer of human verification for actions that could impact many systems, such as device wipes or large-scale changes. This introduces intentional friction that helps prevent catastrophic mistakes or misuse.
5. Move to Phishing-Resistant MFA for Privileged Access - Traditional MFA can be bypassed. For high-risk accounts, adopt phishing-resistant methods like passkeys or hardware-backed authentication, and prioritize these protections for users with administrative access.
6. Make Sure You Can Actually Recover, Not Just Back Up - Backups only matter if they work under pressure. Test your ability to restore critical systems, ensure backups are protected from attackers, and measure how long recovery actually takes in a real-world scenario.

Resources

1. Stryker cyberattack reporting (New York Times) https://www.nytimes.com/2026/03/12/world/middleeast/stryker-iran-cyberattack.html
2. CISA alert on endpoint management system hardening https://www.cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
3. SecurityWeek coverage of the Stryker incident https://www.securityweek.com/medtech-giant-stryker-crippled-by-iran-linked-hacker-attack/
4. Lumos analysis of the Stryker hack https://www.lumos.com/blog/stryker-hack
5. Microsoft Intune security best practices https://techcommunity.microsoft.com/blog/intunecustomersuccess/best-practices-for-securing-microsoft-intune/4502117
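The dual-approval recommendation above boils down to a simple rule: a high-impact action (such as a mass device wipe) executes only after two distinct humans sign off. Here is an illustrative Python sketch of that gate; the class and action names are hypothetical, not any vendor's API.

```python
# Illustrative sketch of a dual-approval gate for high-impact admin actions.
# Names are hypothetical; a real control would live in your admin tooling.

class DualApprovalGate:
    def __init__(self, required_approvers=2):
        self.required = required_approvers
        self.approvals = {}  # action_id -> set of distinct approver names

    def approve(self, action_id, approver):
        self.approvals.setdefault(action_id, set()).add(approver)

    def is_authorized(self, action_id):
        # A set deduplicates, so one admin approving twice still counts once.
        return len(self.approvals.get(action_id, set())) >= self.required

gate = DualApprovalGate()
gate.approve("wipe-all-devices", "alice")
gate.approve("wipe-all-devices", "alice")      # duplicate approval, ignored
print(gate.is_authorized("wipe-all-devices"))  # → False
gate.approve("wipe-all-devices", "bob")
print(gate.is_authorized("wipe-all-devices"))  # → True
```

The key design choice is counting distinct approvers rather than approval events, which is what actually creates the "second human" friction.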
Mass exploitation vulnerabilities are back, and they’re evolving. In this Cyberside Chats Live episode, we break down the recently disclosed React2Shell vulnerability and the confirmed LexisNexis incident, where attackers exploited an unpatched web application to access cloud infrastructure and exfiltrate data. But this isn’t new. From SQL Slammer to Log4Shell to ProxyShell, we’ve seen this pattern before: widely deployed, internet-facing systems + simple exploits + automation = rapid, large-scale compromise. Most importantly, we focus on what matters for organizations today: how to reduce exposure, how to prepare for the next mass exploitation event, and why you should assume compromise the moment one of these vulnerabilities emerges.

Key Takeaways for Security Leaders

1. Inventory and monitor all internet-facing systems. Maintain a current, validated inventory of externally accessible applications and services, because you can’t secure what you don’t know is exposed.
2. Reduce unnecessary exposure at the network edge. Remove or restrict public access to administrative interfaces and systems that do not need to be internet-facing.
3. Build and rehearse a rapid-response playbook for mass-exploitation vulnerabilities. Define roles, timelines, and actions for the first 24–72 hours so your team can move immediately when the next major vulnerability drops.
4. Contact critical vendors and suppliers during major vulnerability events. Don’t wait; proactively verify whether your vendors are affected and whether your data may be at risk through third- or fourth-party exposure.
5. Assume vulnerable internet-facing systems may already be compromised. When mass exploitation begins, attackers are moving at internet speed, and patching alone is not enough. Investigate, hunt for persistence, and validate that systems are clean.

Resources

1. React2Shell vulnerability coverage (BleepingComputer) https://www.bleepingcomputer.com/news/security/react2shell-flaw-exploited-to-breach-30-orgs-77k-ip-addresses-vulnerable/
2. LexisNexis breach details (BleepingComputer) https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-stolen-files/
3. Compromised web hosting panels in cybercrime markets (BleepingComputer) https://www.bleepingcomputer.com/news/security/compromised-site-management-panels-are-a-hot-item-in-cybercrime-markets/
4. CISA Known Exploited Vulnerabilities Catalog https://www.cisa.gov/known-exploited-vulnerabilities-catalog
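Takeaways 1 and 5 combine naturally: once you have an inventory, cross-reference the internet-facing portion against actively exploited CVEs such as those in the CISA KEV catalog. A minimal Python sketch, with the exploited-CVE set inlined for illustration (a real version would load the KEV feed and your scanner output):

```python
# Hypothetical sketch: flag assets that are both internet-facing and affected
# by a known-exploited CVE. Data structures are assumptions for illustration.

def flag_exposed_and_exploited(assets, exploited_cves):
    hits = []
    for asset in assets:
        if not asset.get("internet_facing"):
            continue
        overlap = set(asset.get("cves", [])) & exploited_cves
        if overlap:
            hits.append((asset["host"], sorted(overlap)))
    return hits

kev = {"CVE-2021-34473", "CVE-2021-44228"}  # e.g., ProxyShell, Log4Shell
assets = [
    {"host": "mail.example.com", "internet_facing": True, "cves": ["CVE-2021-34473"]},
    {"host": "wiki.example.com", "internet_facing": False, "cves": ["CVE-2021-44228"]},
]
print(flag_exposed_and_exploited(assets, kev))
# → [('mail.example.com', ['CVE-2021-34473'])]
```

Anything this check returns is a candidate for immediate mitigation plus a compromise assessment, per takeaway 5.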
Anthropic has been labeled a “Supply-Chain Risk to National Security” after refusing two uses of its models: mass surveillance of Americans and lethal autonomous warfare without human oversight. But is Anthropic really a supply-chain risk, and how does this designation affect businesses that use Claude? In this episode, Sherri Davidoff and Matt Durrin unpack the timeline behind the Pentagon’s designation, what Anthropic claims is actually driving the conflict, and what’s known (and not known) about any underlying technical risk. They compare the situation to Kaspersky, where the supply-chain concern centered on privileged security software, foreign-state leverage, and update-channel risk, then bring it back to the enterprise questions that matter: vendor dependency, continuity planning, and what changes when an AI provider becomes politically or contractually constrained.

Key Takeaways for Security Leaders

1. Treat AI vendors as critical dependencies, not just tools. If a frontier AI provider is embedded in coding, search, documentation, analytics, or agentic workflows, a legal or procurement shock can become an operational disruption. Track where you are dependent on a single model provider and where that dependency would hurt most.
2. For your highest-value uses, define fallback workflows ahead of time. You may not be able to replace every provider quickly, but you should know what happens if a key AI service becomes unavailable, restricted, or no longer acceptable for regulatory or contractual reasons. For the workflows that matter most, decide in advance how the work gets done without that vendor.
3. Keep guardrails in place when AI is involved in critical changes. AI can speed up engineering, operations, and decision-making, but that speed can create new failure modes if approvals, testing, rollback, and human review get weakened. Be especially careful in environments where AI-assisted or agentic systems can make infrastructure, code, security, or configuration changes.
4. Inventory where AI has real privilege. The risk is much higher when AI can execute code, access sensitive data, approve actions, or trigger automations. Focus your review on those integrations first, because those are the places where vendor problems or internal AI mistakes are most likely to turn into real incidents.
5. Make your teams define the actual vendor risk they are worried about. A vendor can create very different kinds of risk: technical compromise risk, foreign-control risk, continuity risk, or procurement/governance risk. Forcing that distinction helps teams respond more clearly and avoid treating every controversy like a hidden software compromise.

Resources

1. Statement from Dario Amodei on our discussions with the Department of War (Anthropic, Feb. 26, 2026) https://www.anthropic.com/news/statement-department-of-war
2. Where things stand with the Department of War (Anthropic, Mar. 5, 2026) https://www.anthropic.com/news/where-stand-department-war
3. Anthropic v. U.S. Department of War et al. – Complaint for Declaratory and Injunctive Relief (N.D. Cal., filed Mar. 9, 2026) (court filing PDF) https://cand.uscourts.gov/cases-e-filing/cases/326-cv-01996/anthropic-pbc-v-us-department-war-et-al
4. BOD 17-01: Removal of Kaspersky-branded Products (CISA/DHS, Sept. 13, 2017) https://www.dhs.gov/archive/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01
5. Amazon holds engineering meeting following AI-related outages (Financial Times, Mar. 2026) https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de
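One way to act on "inventory where AI has real privilege" is to score each AI integration by what it can do and review the riskiest first. The sketch below is purely illustrative: the capability names and weights are invented, and any real scoring scheme should come from your own risk model.

```python
# Hypothetical sketch: rank AI integrations by privilege so review effort goes
# to the highest-risk ones first. Capability names and weights are assumptions.

RISK_WEIGHTS = {
    "execute_code": 5,
    "approve_actions": 4,
    "access_sensitive_data": 3,
    "trigger_automations": 3,
    "read_docs": 1,
}

def rank_by_privilege(integrations):
    def score(item):
        return sum(RISK_WEIGHTS.get(c, 0) for c in item["capabilities"])
    return sorted(integrations, key=score, reverse=True)

integrations = [
    {"name": "doc-summarizer", "capabilities": ["read_docs"]},
    {"name": "agentic-deployer", "capabilities": ["execute_code", "trigger_automations"]},
    {"name": "support-bot", "capabilities": ["access_sensitive_data"]},
]
print([i["name"] for i in rank_by_privilege(integrations)])
# → ['agentic-deployer', 'support-bot', 'doc-summarizer']
```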
For years, many Google API keys were treated as “public” project identifiers embedded in client-side code and protected mainly through referrer and API restrictions. But a recent discovery suggests Gemini changes that risk model: researchers found nearly 3,000 publicly exposed Google API keys that were still “live” and could be used to interact with Gemini endpoints, creating a new path to unauthorized usage, quota exhaustion, and potentially costly API charges. In this episode of Cyberside Chats, we unpack what “changed the rules” actually means, why this is a classic cloud governance problem (old assumptions meeting new capabilities), and what to check right now. The bottom line: AI features are quietly expanding the blast radius of credentials you never intended to treat as secrets.

Key Takeaways

1. Audit legacy API keys before and after enabling AI services - Inventory every API key across your cloud projects and confirm it is still required, properly scoped, and has a clear owner. Treat AI enablement as a formal trigger event to reassess any previously published or embedded keys in that same project.
2. Treat API keys as sensitive credentials in the AI era - Even if a vendor once described a key as “not a secret,” AI endpoints materially increase financial and potential data exposure risk. Apply rotation, monitoring, strict quotas, and real-time billing alerts accordingly.
3. Enforce least privilege at the API level - Referrer or IP restrictions alone are insufficient. Every key should be explicitly limited to only the APIs it requires. “Allow all APIs” should not exist in production.
4. Isolate AI development from production application projects - Avoid enabling AI services in long-lived projects that contain public-facing keys. Use separate projects, accounts, or subscriptions for AI experimentation and production workloads to reduce blast radius and cost exposure.
5. Update third-party risk management to include AI-driven credential and cost risk - Ask vendors how API keys are scoped, restricted, rotated, and monitored, especially for AI services. Confirm that AI environments are isolated from production systems and that abnormal AI usage or billing spikes are actively monitored.

Resources:

1. Google API Keys Weren’t Secrets. But then Gemini Changed the Rules (Truffle Security) https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
2. Previously harmless Google API keys now expose Gemini AI data (BleepingComputer) https://www.bleepingcomputer.com/news/security/previously-harmless-google-api-keys-now-expose-gemini-ai-data/
3. DEF CON 31 – “Private Keys in Public Places” (Tom Pohl) (YouTube) https://www.youtube.com/watch?v=7t_ntuSXniw
4. Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security (LMG Security) https://www.lmgsecurity.com/exposed-secrets-broken-trust-what-the-doge-api-key-leak-teaches-us-about-software-security
5. Google Cloud docs: API keys overview & best practices (Google) https://docs.cloud.google.com/api-keys/docs/overview
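A cheap first pass at the key audit in takeaway 1 is to scan your repositories and configs for strings shaped like Google API keys. The "AIza" prefix followed by 35 key characters is the commonly reported format; treat any match as a candidate to verify, restrict, and rotate rather than a confirmed live key.

```python
import re

# Sketch: find strings shaped like Google API keys in source text. The pattern
# reflects the widely reported key format; matches are candidates, not proof.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text):
    return GOOGLE_KEY_RE.findall(text)

# Fabricated example of the right shape (not a real credential):
sample = 'const cfg = { apiKey: "' + "AIza" + "A" * 35 + '" };'
print(find_candidate_keys(sample))  # one 39-character candidate
```

Dedicated secret scanners (e.g., the TruffleHog-style tooling behind the research cited above) go further by verifying whether a candidate key is live, but even a regex sweep catches keys embedded in client-side code.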
Claude Opus 4.6 is generating serious buzz for one reason: it can rapidly spot zero-day vulnerabilities out of the box, suggesting that long-trusted software may no longer be as “safe by default” as security teams assume. At the same time, Microsoft’s February patch cycle included an unusually high number of zero-days already under active exploitation, real-world evidence that the race is accelerating and the window between discovery and impact is shrinking. In this Cyberside Chats Live, we’ll connect the dots on what this means for defenders in 2026: a shrinking window between discovery and exploitation, shifting assumptions about “well-tested” software, and practical ways to rethink patch prioritization, detection, and exposure management.

Key Takeaways:

1. Plan for exploitation before disclosure - The era of negative-day vulnerabilities is here: flaws may be discovered and weaponized before the broader security community even knows they exist. Assume exploitation could precede public advisories. Build response models around mitigation speed, not just patch timelines.
2. Prioritize exposure, not just severity - In a compressed exploit cycle, CVSS alone won’t protect you. Focus first on internet-facing systems, identity infrastructure, and high-privilege assets. If you cannot quickly identify what is externally reachable, that visibility gap becomes strategic risk.
3. Assume compromise on exposed assets and monitor accordingly - If attackers can exploit vulnerabilities before the world knows they exist, you may be compromised without a CVE to point to. Increase monitoring on internet-facing systems and critical apps for signs of intrusion: unexpected processes, new admin accounts, unusual authentication patterns, suspicious outbound connections, and persistence mechanisms.
4. Treat compensating controls as first-line defense - When patches aren’t available or cannot be deployed immediately, rapid mitigations matter. Restrict access, disable vulnerable features, deploy firewall and WAF protections, and tighten segmentation. Mitigation agility should be operational, tested, and pre-authorized.
5. Prepare for containment when patches may not exist - If exploitation is confirmed and no fix is available, leadership decisions must happen quickly. Define in advance who can isolate systems, disable services, revoke credentials, or temporarily disrupt operations. Shorten containment decision cycles before you need them.
6. Rehearse a “negative-day” tabletop - Run a scenario where exploitation is active, no patch exists, and public disclosure hasn’t occurred. Measure how fast you can reduce exposure, hunt internally, and communicate with executives. This exercise will expose friction points that policies alone will not.
7. Integrate AI into your vendor risk model - If AI is accelerating vulnerability discovery and code generation, your third parties are likely using it too. Update vendor due diligence to assess how AI-generated code is reviewed, secured, and tested. Ask about model governance, secure development controls, and vulnerability response timelines. If you lack visibility into how vendors manage AI risk, that gap becomes part of your attack surface.

Resources:

1. Anthropic – Evaluating and Mitigating the Growing Risk of LLM-Discovered 0-Days (Feb 5, 2026) https://red.anthropic.com/2026/zero-days/
2. Zero Day Initiative – February 2026 Security Update Review https://www.zerodayinitiative.com/blog/2026/2/10/the-february-2026-security-update-review
3. SecurityWeek – 6 Actively Exploited Zero-Days Patched by Microsoft (Feb 2026) https://www.securityweek.com/6-actively-exploited-zero-days-patched-by-microsoft-with-february-2026-updates/
4. Tenable – Claude Opus and AI-Driven Vulnerability Discovery Analysis https://www.tenable.com/blog/Anthropic-Claude-Opus-AI-vulnerability-discovery-cybersecurity
5. OpenAI releases crypto security tool as Claude blamed for $2.7m Moonwell bug https://www.dlnews.com/articles/defi/openai-releases-crypto-security-tool/
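One concrete hunt from takeaway 3, watching for new admin accounts, is just a diff between the current privileged-account list and a known-good baseline. A minimal Python sketch (the account names are invented):

```python
# Hypothetical hunting sketch: compare current privileged accounts against a
# known-good baseline to spot persistence via newly created admin accounts.

def diff_admins(baseline, current):
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    return added, removed

baseline = {"admin", "backup-svc"}
current = {"admin", "backup-svc", "svc_update$"}  # attacker-style addition
added, removed = diff_admins(baseline, current)
print(added)    # → ['svc_update$']
print(removed)  # → []
```

The same baseline-and-diff pattern works for scheduled tasks, services, and firewall rules: the cheap part is the comparison, the real work is keeping the baseline trustworthy.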
After the FBI announced it recovered previously inaccessible video from Nancy Guthrie’s disconnected Google Nest doorbell, one thing became clear: in releasing the footage, authorities revealed an important truth. Deleted surveillance footage may not really be deleted, which means law enforcement (or threat actors) could potentially access it. The case remains ongoing and deeply serious. For enterprise security leaders, the lesson is bigger than a consumer camera: modern systems often retain residual data across devices, local buffers, and vendor backends, even when teams believe it has been removed. In this episode of Cyberside Chats, we examine what that means for corporate environments, including IoT and physical security systems, data retention and legal exposure, vendor access models, and incident response realities when “deleted” data can still be recovered. This case underscores a complex reality: data can remain accessible long after we believe it’s gone, sometimes a source of risk, and sometimes invaluable.

Key Takeaways:

1. Treat vendors as part of your data perimeter - Review contracts and platform settings to understand who can access footage or logs, what “support access” entails, what data is retained in backend systems, and how data is handled during incident response or legal requests.
2. Control encryption keys and access paths - Know who holds encryption keys, how administrative access is granted and monitored, and whether “end-to-end encryption” claims align with your threat model and regulatory requirements.
3. Include IoT and security devices in your data inventory - Cameras, badge systems, and smart building technology are data systems. Document on-device storage, cloud sync behavior, local buffers, and backend retention, not just cloud repositories.
4. Align retention decisions with legal and regulatory risk - Longer retention may aid investigations but increases eDiscovery scope, breach exposure, and privacy obligations. Retention should be a deliberate business risk decision made with Legal and Compliance.
5. Test whether deletion actually works - Validate purge workflows across vendor platforms and internal systems, including backups and disaster recovery, because “logical deletion” often isn’t “forensic deletion.” Build policies around how long data persists in replicas, backups, buffers, and vendor systems, and plan accordingly in both incident response and governance strategy.

Resources:

1. Tom’s Guide – How did the FBI get Nancy Guthrie’s Google Nest camera footage if it was disabled — and what does it mean for your privacy? https://www.tomsguide.com/computing/online-security/how-did-the-fbi-get-nancy-guthries-google-nest-camera-footage-if-it-was-disabled-and-what-does-it-mean-for-your-privacy
2. CNET – Amazon’s Ring cameras push deeper into police and government surveillance https://www.cnet.com/home/security/amazons-ring-cameras-push-deeper-into-police-and-government-surveillance/
3. NBC News – Ring doorbell camera employees mishandled customer videos, FTC says https://www.nbcnews.com/business/consumer/ring-doorbell-camera-employees-mishandled-customer-videos-rcna87103
4. Federal Trade Commission – Ring Refunds https://www.ftc.gov/enforcement/refunds/ring-refunds
5. R Street Institute – Apple pulls end-to-end encryption feature from UK after demands for law enforcement access https://www.rstreet.org/commentary/apple-pulls-end-to-end-encryption-feature-from-uk-after-demands-for-law-enforcement-access/
6. Exposing the Secret Office 365 Forensics Tool – An ethical crisis in the digital forensics industry came to a head last week with the release of new details on Microsoft’s undocumented “Activities” API. https://www.lmgsecurity.com/exposing-the-secret-office-365-forensics-tool/
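"Test whether deletion actually works" can start as a systematic check: after a purge, query every location in your data map (primary store, replicas, backups, vendor copies) for residual copies. A minimal Python sketch with illustrative store names:

```python
# Hypothetical sketch: after a purge, check every store in the data map for
# residual copies of a supposedly deleted record. Store names are illustrative.

def residual_copies(record_id, data_map):
    """Return the stores where a supposedly deleted record still exists."""
    return [store for store, ids in data_map.items() if record_id in ids]

data_map = {
    "primary-db": set(),           # purged
    "nightly-backup": {"rec-42"},  # still present
    "vendor-cloud": {"rec-42"},    # still present
    "dr-replica": set(),
}
print(residual_copies("rec-42", data_map))  # → ['nightly-backup', 'vendor-cloud']
```

In practice the hard part is enumerating the stores at all, which is why the data inventory in takeaway 3 comes first.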
Ransomware gangs aren’t operating alone anymore, and the lines between them are increasingly blurry. In this episode of Cyberside Chats, we look at how modern ransomware groups collaborate, specialize, and team up to scale attacks faster. Using ShinyHunters’ newly launched data leak website as an example, we discuss how different crews handle access, social engineering, and data exposure, and why overlapping roles make attribution, defense, and response harder. We also explore what this shift means for security leaders, from training and identity protection to preparing for data extortion that doesn’t involve encryption.

Key Takeaways

1. Harden identity and SaaS workflows, not just endpoints - Review help desk procedures, SSO flows, OAuth permissions, and admin access. Many recent incidents succeed without malware or exploits.
2. Train staff for voice phishing and IT impersonation - Add vishing scenarios to security awareness programs, especially for help desk and IT-adjacent roles.
3. Limit blast radius across cloud and SaaS platforms - Enforce least privilege, audit third-party integrations, and regularly review OAuth scopes and token lifetimes.
4. Plan for data extortion without ransomware - Update incident response plans and tabletop exercises to assume data theft and public exposure, even when no systems are encrypted.
5. Practice executive decision-making under data exposure pressure - Tabletop exercises should include legal, communications, and leadership discussions about public leaks, reputational risk, and extortion demands.

Resources

1. Panera Bread Breach Linked to ShinyHunters and Voice Phishing https://mashable.com/article/panera-bread-breach-shinyhunters-voice-phishing-14-million-customers
2. BreachForums Database Leak Exposes 324,000 Accounts https://www.bleepingcomputer.com/news/security/breachforums-hacking-forum-database-leaked-exposing-324-000-accounts/
3. BreachForums Disclosure and ShinyHunters https://blog.barracuda.com/2026/01/26/breachforums-disclosure-shinyhunters
4. Scattered LAPSUS$ Hunters: 2025’s Most Dangerous Cybercrime https://www.picussecurity.com/resource/blog/scattered-lapsus-hunters-2025s-most-dangerous-cybercrime-supergroup
5. Microsoft Digital Defense Report https://www.microsoft.com/security/business/security-insider/microsoft-digital-defense-report
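The OAuth-scope review in takeaway 3 can be automated as a comparison of each third-party grant against an approved-scope allowlist. The scope strings below are invented for illustration; real scope names depend on your identity provider.

```python
# Hypothetical sketch: flag third-party OAuth grants whose scopes exceed an
# approved allowlist. Scope and app names are illustrative.

ALLOWED_SCOPES = {"read:profile", "read:calendar"}

def overbroad_grants(grants):
    findings = {}
    for grant in grants:
        excess = set(grant["scopes"]) - ALLOWED_SCOPES
        if excess:
            findings[grant["app"]] = sorted(excess)
    return findings

grants = [
    {"app": "calendar-sync", "scopes": ["read:calendar"]},
    {"app": "crm-plugin", "scopes": ["read:profile", "admin:org", "write:files"]},
]
print(overbroad_grants(grants))  # → {'crm-plugin': ['admin:org', 'write:files']}
```

Run against an export of your actual integrations, a check like this turns a periodic manual review into a repeatable report.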
AI is no longer a standalone tool; it is embedded directly into productivity platforms, collaboration systems, analytics workflows, and customer-facing applications. In this special Cyberside Chats episode, Sherri Davidoff and Matt Durrin break down why lack of visibility and control over AI has emerged as the first and most pressing top threat of 2026. Using real-world examples like the EchoLeak zero-click vulnerability in Microsoft 365 Copilot, the discussion highlights how AI can inherit broad, legitimate access to enterprise data while operating outside traditional security controls. These risks often generate no alerts, no indicators of compromise, and no obvious “incident” until sensitive data has already been exposed or misused. Listeners will walk away with a practical framework for understanding where AI risk hides inside modern environments, plus concrete steps security and IT teams can take to centralize AI usage, regain visibility, govern access, and apply long-standing security principles to this rapidly evolving attack surface.

Key Takeaways

1. Centralize AI usage across the organization. Require a clear, centralized process for approving AI tools and enabling new AI features, including those embedded in existing SaaS platforms.
2. Gain visibility into AI access and data flows. Inventory which AI tools, agents, and features are in use, which users interact with them, and what data sources they can access or influence.
3. Restrict and govern AI usage based on data sensitivity. Align AI permissions with data classification, restrict use for regulated or highly sensitive data sets, and integrate AI considerations into vendor risk management.
4. Apply the principle of least privilege to AI systems. Treat AI like any other privileged entity by limiting access to only what is necessary and reducing blast radius if credentials or models are misused.
5. Evaluate technical controls designed for AI security. Consider emerging solutions such as AI gateways that provide enforcement, logging, and observability for prompts, responses, and model access.

Resources

1. Microsoft Digital Defense Report 2025 https://www.microsoft.com/en-us/security/security-insider/threat-landscape/microsoft-digital-defense-report-2025
2. NIST AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework
3. Microsoft 365 Copilot Zero-Click AI Vulnerability (EchoLeak) https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/
4. Adapting to AI Risks: Essential Cybersecurity Program Updates https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates/
5. Microsoft on Agentic AI and Embedded Automation (2026) https://news.microsoft.com/source/2026/01/08/microsoft-propels-retail-forward-with-agentic-ai-capabilities-that-power-intelligent-automation-for-every-retail-function/
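Takeaways 3 and 4 can be enforced together: record each AI tool's maximum approved data tier, then flag any connected source whose classification exceeds it. A minimal Python sketch; the tier names and their ordering are assumptions, not a standard.

```python
# Hypothetical sketch: check AI tool permissions against data classification.
# Tier names and their ordering are assumptions for illustration.

TIERS = ["public", "internal", "confidential", "restricted"]

def violations(ai_tools, source_tiers):
    """Flag (tool, source) pairs where a source exceeds the tool's approved tier."""
    out = []
    for tool in ai_tools:
        limit = TIERS.index(tool["max_tier"])
        for src in tool["connected_sources"]:
            if TIERS.index(source_tiers[src]) > limit:
                out.append((tool["name"], src))
    return out

source_tiers = {"wiki": "internal", "hr-records": "restricted"}
tools = [{"name": "assistant", "max_tier": "internal",
          "connected_sources": ["wiki", "hr-records"]}]
print(violations(tools, source_tiers))  # → [('assistant', 'hr-records')]
```

This is exactly the sort of policy check an AI gateway could enforce at request time rather than in a periodic audit.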
The recent Verizon outage underscores a growing risk in today’s technology landscape: when critical services are concentrated among a small number of providers, failures don’t stay isolated. In this live discussion, we’ll connect the Verizon outage to past telecom and cloud disruptions to examine how infrastructure dependency creates cascading business impact. We’ll also explore how large-scale outages intersect with security threats targeting telecommunications, where availability, confidentiality, and integrity failures increasingly overlap. The session will close with actionable takeaways for strengthening resilience and risk planning across cybersecurity and IT programs.

Key Takeaways

1. Diversify your technology infrastructure. Relying on a single carrier, cloud provider, or bundled service creates a single point of failure. Purposeful diversification across providers can reduce the impact of large-scale outages and improve overall resilience.
2. Treat outages as security incidents, not just reliability problems. Large-scale telecom and cloud outages directly disrupt authentication, monitoring, and incident response, and should trigger security workflows, not just IT troubleshooting.
3. Identify and document your dependencies on carriers and cloud providers. Many security controls rely on SMS, voice, cloud identity, or single regions; understanding these dependencies ahead of time prevents dangerous blind spots during outages.
4. Plan and test incident response without phones, SMS, or primary cloud access. Assume your normal communication and authentication methods will fail and ensure your teams know how to coordinate securely when core services are unavailable.
5. Expect outages to increase fraud and social engineering activity. Attackers exploit confusion and urgency during service disruptions, so security teams should prepare staff for impersonation and “service restoration” scams during major outages.
6. Use widespread outages as learning opportunities. Review what happened, assess how your organization was, or could have been, impacted, identify potential areas for improvement, and update incident response, communications, and resilience plans accordingly.

Resources

1. Verizon official network outage update https://www.verizon.com/about/news/update-network-outage
2. Forrester: Verizon outage reignites reliability concerns https://www.forrester.com/blogs/verizon-outage-reignites-reliability-concerns/
3. CNN: Verizon outage disrupted phone and internet service nationwide https://www.cnn.com/2026/01/15/tech/verizon-outage-phone-internet-service
4. AP News: Verizon outage disrupted calling and data services nationwide https://apnews.com/article/85d658a4fb6a6175cae8981d91a809c9
5. CNN: AT&T outage shows how dependent daily life has become on mobile networks (2024) https://www.cnn.com/2024/02/23/tech/att-outage-customer-service
The FTC has issued an order against General Motors for collecting and selling drivers’ precise location and behavior data, gathered every few seconds and marketed as a safety feature. That data was sold into insurance ecosystems and used to influence pricing and coverage decisions — a clear reminder that how organizations collect, retain, and share data now carries direct security, regulatory, and financial risk. In this episode of Cyberside Chats, we explain why the GM case matters to CISOs, cybersecurity leaders, and IT teams everywhere. Data proliferation doesn’t just create privacy exposure; it creates systemic risk that fuels identity abuse, authentication bypass, fake job applications, and deepfake campaigns across organizations. The message is simple: data is hazardous material, and minimizing it is now a core part of cybersecurity strategy. Key Takeaways: 1. Prioritize data inventory and mapping in 2026. You cannot assess risk, select controls, or meet regulatory obligations without knowing what data you have, where it lives, how it flows, and why it is retained. 2. Reduce data to reduce risk. Data minimization is a security control that lowers breach impact, compliance burden, and long-term cost. 3. Expect that regulators care about data use, not just breaches. Enforcement increasingly targets over-collection, secondary use, sharing, and retention even when no breach occurs. 4. Create and actively use a data classification policy. Classification drives retention, access controls, monitoring, and protection aligned to data value and regulatory exposure. 5. Design identity and recovery assuming personal data is already compromised. Build authentication and recovery flows that do not rely on the secrecy of SSNs, dates of birth, addresses, or other static personal data. 6. 
Train teams on data handling, not just security tools. Ensure engineers, IT staff, and business teams understand what data can be collected, how long it can be retained, where it may be stored, and how it can be shared. Resources: 1. California Privacy Protection Agency — Delete Request and Opt-Out Platform (DROP) https://privacy.ca.gov/drop/ 2. FTC Press Release — FTC Takes Action Against General Motors for Sharing Drivers’ Precise Location and Driving Behavior Data https://www.ftc.gov/news-events/news/press-releases/2025/01/ftc-takes-action-against-general-motors-sharing-drivers-precise-location-driving-behavior-data 3. California Delete Act (SB 362) — Overview https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362 4. Texas Attorney General — Data Privacy Enforcement Actions https://www.texasattorneygeneral.gov/news/releases 5. Data Breaches by Sherri Davidoff https://www.amazon.com/Data-Breaches-Opportunity-Sherri-Davidoff/dp/0134506782
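The data inventory and classification takeaways above can be sketched in code. Below is a minimal, hypothetical sweep that flags free-text records containing static personal identifiers; the regex patterns and sample records are illustrative assumptions, not a production classifier.

```python
import re

# Hypothetical detectors for a data-inventory sweep: each maps a label to a
# regex for a static personal identifier that should not rely on secrecy.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "dob": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
}

def classify_record(text):
    """Return the set of sensitive-data labels found in a free-text record."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

# Example: flag records that would need retention/minimization review.
records = [
    "Customer note: DOB 04/12/1988, prefers email",
    "Shipping delayed due to weather",
    "Applicant SSN 123-45-6789 on file",
]
flagged = [(r, classify_record(r)) for r in records if classify_record(r)]
```

In practice a scan like this feeds the data map: flagged records point to stores that need retention, access, and minimization review.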
When Venezuela experienced widespread power and internet outages, the impact went far beyond inconvenience—it created a perfect environment for cyber exploitation. In this episode of Cyberside Chats, we use Venezuela’s disruption as a case study to show how cyber risk escalates when power, connectivity, and trusted services break down. We examine why phishing, fraud, and impersonation reliably surge after crises, how narratives around cyber-enabled disruption can trigger copycat or opportunistic attacks, and why even well-run organizations resort to risky security shortcuts when normal systems fail. We also explore how attackers weaponize emergency messaging, impersonate critical infrastructure and connectivity providers, and exploit verification failures when standard workflows are disrupted. The takeaway is simple: when infrastructure collapses, trust erodes—and cybercrime scales quickly to fill the gap.
The December release of the Epstein files wasn’t just controversial—it exposed a set of security problems organizations face every day. Documents that appeared heavily redacted weren’t always properly sanitized. Some files were pulled and reissued, drawing even more attention. And as interest surged, attackers quickly stepped in, distributing malware and phishing sites disguised as “Epstein archives.” In this episode of Cyberside Chats, we use the Epstein files as a real-world case study to explore two sides of the same problem: how organizations can be confident they’re not releasing more data than intended, and how they can trust—or verify—the information they consume under pressure. We dig into redaction failures, how AI tools change the risk model, how attackers weaponize breaking news, and practical ways teams can authenticate data before reacting.
Amazon released two security disclosures in the same week — and together, they reveal how modern attackers are getting inside organizations without breaking in. One case involved a North Korean IT worker who entered Amazon’s environment through a third-party contractor and was detected through subtle behavioral anomalies rather than malware. The other detailed a years-long Russian state-sponsored campaign that shifted away from exploits and instead abused misconfigured edge devices and trusted infrastructure to steal and replay credentials. Together, these incidents show how nation-state attackers are increasingly blending into human and technical systems that organizations already trust — forcing defenders to rethink how initial access really happens going into 2026. Key Takeaways 1. Treat hiring and contractors as part of your attack surface. Nation-state actors are deliberately targeting IT and technical roles. Contractor onboarding, identity verification, and access scoping should be handled with the same rigor as privileged account provisioning. 2. Secure and monitor network edge devices as identity infrastructure. Misconfigured edge devices have become a primary initial access vector. Inventory them, assign ownership, restrict management access, and monitor them like authentication systems — not just networking gear. 3. Enforce strong MFA everywhere credentials matter. If credentials can be used without MFA, assume they will be abused. Require MFA on VPNs, edge device management interfaces, cloud consoles, SaaS admin portals, and internal administrative access. 4. Harden endpoints and validate how access actually occurs. Endpoint security still matters. Harden devices and look for signs of remote control, unusual latency, or access paths that don’t match how work is normally done. 5. Shift detection from “malicious” to “out of place.” The most effective attacks often look legitimate. 
Focus detection on behavioral mismatches — access that technically succeeds but doesn’t align with role, geography, timing, or expected workflow. Resources: 1. Amazon Threat Intelligence Identifies Russian Cyber Threat Group Targeting Western Critical Infrastructure https://aws.amazon.com/blogs/security/amazon-threat-intelligence-identifies-russian-cyber-threat-group-targeting-western-critical-infrastructure/ 2. Amazon Caught North Korean IT Worker by Tracing Keystroke Data https://www.bloomberg.com/news/newsletters/2025-12-17/amazon-caught-north-korean-it-worker-by-tracing-keystroke-data/ 3. North Korean Infiltrator Caught Working in Amazon IT Department Thanks to Keystroke Lag https://www.tomshardware.com/tech-industry/cyber-security/north-korean-infiltrator-caught-working-in-amazon-it-department-thanks-to-lag-110ms-keystroke-input-raises-red-flags-over-true-location 4. Confessions of a Laptop Farmer: How an American Helped North Korea’s Remote Worker Scheme https://www.bloomberg.com/news/articles/2023-08-23/confessions-of-a-laptop-farmer-how-an-american-helped-north-korea-s-remote-worker-scheme 5. Hiring security checklist https://www.lmgsecurity.com/resources/hiring-security-checklist/
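The “unusual latency” signal from the keystroke story above can be illustrated with a toy check. This is a hedged sketch, not Amazon’s actual method: it flags a session whose inter-keystroke latency is uniformly shifted above the user’s baseline, as input relayed through a remote hop tends to be. All numbers are invented for illustration.

```python
from statistics import median

def added_latency_ms(baseline_ms, observed_ms, threshold_ms=50):
    """Flag a session whose typing latency is uniformly shifted above the
    user's baseline -- consistent with input relayed through a remote hop,
    not with normal typing variation. Returns the shift if it exceeds the
    threshold, else 0."""
    shift = median(observed_ms) - median(baseline_ms)
    return shift if shift >= threshold_ms else 0

# Hypothetical numbers: a local baseline around 35 ms vs. a session where
# every keystroke arrives ~110 ms later (the lag cited in the Amazon case).
baseline = [30, 35, 40, 33, 37]
session = [140, 145, 150, 143, 147]
flag = added_latency_ms(baseline, session)
```

A uniform shift across every keystroke is the tell: normal typing varies per key, while a relay adds roughly constant delay to all of them.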
AI has supercharged phishing, deepfakes, and impersonation attacks—and 2025 proved that our trust systems aren’t built for this new reality. In this episode, Sherri and Matt break down the #1 change every security program needs in 2026: dramatically improving identity and authentication across the organization. We explore how AI blurred the lines between legitimate and malicious communication, why authentication can no longer stop at the login screen, and where organizations must start adding verification into everyday workflows—from IT support calls to executive requests and financial approvals. Plus, we discuss what “next-generation” user training looks like when employees can no longer rely on old phishing cues and must instead adopt identity-safety habits that AI can’t easily spoof. If you want to strengthen your security program for the year ahead, this is the episode to watch. Key Takeaways: Audit where internal conversations trigger action. Before adding controls, understand where trust actually matters—financial approvals, IT support, HR changes, executive requests—and treat those points as attack surfaces. Expand authentication into everyday workflows. Add verification to calls, video meetings, chats, approvals, and support interactions using known systems, codes, and out-of-band confirmation. Apply friction intentionally where mistakes are costly. Use verified communication features in collaboration platforms. Enable identity indicators, reporting features, and access restrictions in tools like Teams and Slack, and treat them as identity systems rather than just chat tools. Implement out-of-band push confirmation for high-risk requests. Authenticator-based confirmation defeats voice, video, and message impersonation because attackers rarely control multiple channels simultaneously. Move toward continuous identity validation. Identity should be reassessed as behavior and risk change, with step-up verification and session revocation for high-risk actions. 
Redesign training around identity safety. Teach employees how to verify people and requests, not just emails, and reward them for slowing down and confirming—even when it frustrates leadership. Tune in weekly on Tuesdays at 6:30 am ET for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with cybersecurity testing, advisory services, or training. Resources: CFO.com – Deepfake CFO Scam Costs Engineering Firm $25 Million https://www.cfo.com/news/deepfake-cfo-hong-kong-25-million-fraud-cyber-crime/ Retool – MFA Isn’t MFA https://retool.com/blog/mfa-isnt-mfa Sophos MDR tracks two ransomware campaigns using “email bombing,” Microsoft Teams “vishing” https://news.sophos.com/en-us/2025/01/21/sophos-mdr-tracks-two-ransomware-campaigns-using-email-bombing-microsoft-teams-vishing/ Wired – Doxers Posing as Cops Are Tricking Big Tech Firms Into Sharing People’s Private Data https://www.wired.com/story/doxers-posing-as-cops-are-tricking-big-tech-firms-into-sharing-peoples-private-data/ LMG Security – 5 New-ish Microsoft Security Features & What They Reveal About Today’s Threats https://www.lmgsecurity.com/5-new-ish-microsoft-security-features-what-they-reveal-about-todays-threats/
Microsoft is rolling out a series of new-ish security features across Microsoft 365 in 2026 — and these updates are no accident. They’re direct responses to how attackers are exploiting collaboration tools like Teams, Slack, Zoom, and Google Chat. In this episode, Sherri and Matt break down the five features that matter most, why they’re happening now, and how every organization can benefit from these lessons, even if you’re not a Microsoft shop. We explore the rise of impersonation attacks inside collaboration platforms, the security implications of AI copilots like Microsoft Copilot and Gemini, and why identity boundaries and data governance are quickly becoming foundational to modern security programs. You’ll come away with a clear understanding of what these new-ish Microsoft features signal about the evolving threat landscape — and practical steps you can take today to strengthen your security posture. Key Takeaways Treat collaboration platforms as high-risk communication channels. Attackers increasingly use Teams, Slack, Zoom, and similar tools to impersonate coworkers or support staff, and organizations should help employees verify unexpected contacts just as rigorously as they verify email. Make it easy for users to report suspicious activity. Whether or not your platform offers a built-in reporting feature like Microsoft’s suspicious-call button, employees need a simple, well-understood way to escalate strange messages or calls inside collaboration tools. Monitor external collaboration for anomalies. Microsoft’s new anomaly report highlights a growing need across all ecosystems to watch for unexpected domains, unusual activity patterns, and impersonation attempts that occur through external collaboration channels. Classify and label sensitive data before enabling AI assistants. 
AI tools such as Copilot, Gemini, and Slack GPT inherit user permissions and may access far more information than intended if organizations haven’t established clear sensitivity labels and access boundaries. Enforce identity and tenant boundaries to limit data leakage. Features like Tenant Restrictions v2 demonstrate the importance of restricting where users can authenticate and ensuring that corporate data stays within approved environments. Update security training to reflect collaboration-era social engineering. Modern attacks frequently occur through chat messages, impersonated vendor accounts, malicious external domains, or voice/video calls, and training must evolve beyond traditional email-focused programs. Please follow our podcast for the latest cybersecurity advice, and visit us at www.LMGsecurity.com if you need help with technical testing, cybersecurity consulting, and training! Resources Mentioned Microsoft 365: Advancing Microsoft 365 – New Capabilities and Pricing Update: https://www.microsoft.com/en-us/microsoft-365/blog/2025/12/04/advancing-microsoft-365-new-capabilities-and-pricing-update/ Microsoft 365 Roadmap – Suspicious Call Reporting (ID 536573): https://www.microsoft.com/en-us/microsoft-365/roadmap?id=536573 Check Point Research: Exploiting Trust in Microsoft Teams: https://blog.checkpoint.com/research/exploiting-trust-in-collaboration-microsoft-teams-vulnerabilities-uncovered/ Phishing Susceptibility Study (arXiv): https://arxiv.org/abs/2510.27298 LMG Security Video: Email Bombing & IT Helpdesk Spoofing Attacks—How to Stop Them: https://www.lmgsecurity.com/videos/email-bombing-it-helpdesk-spoofing-attacks-how-to-stop-them/
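The external-collaboration anomaly monitoring described above can be approximated on any platform that logs external participants, not just Microsoft's. A minimal sketch, where the approved-domain list and event format are assumptions:

```python
# Hypothetical allowlist of approved external collaboration partners.
APPROVED_EXTERNAL_DOMAINS = {"partner.example.com", "vendor.example.net"}

def unexpected_domains(events):
    """Given (user, external_domain) collaboration events, return external
    domains not on the approved-partner list -- candidates for the anomaly
    review described in the takeaways above."""
    return {domain for _, domain in events} - APPROVED_EXTERNAL_DOMAINS

events = [
    ("alice", "partner.example.com"),
    ("bob", "helpdesk-micr0soft.example.org"),  # look-alike support domain
]
suspicious = unexpected_domains(events)
```
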
A massive 7-year espionage campaign hid in plain sight. Harmless Chrome and Edge extensions — wallpaper tools, tab managers, PDF converters — suddenly flipped into full surveillance implants, impacting more than 4.3 million users. In this episode, we break down how ShadyPanda built trust over years, then weaponized auto-updates to steal browsing history, authentication tokens, and even live session cookies. We’ll walk through the timeline, what data was stolen, why session hijacking makes this attack so dangerous, and the key steps security leaders must take now to prevent similar extension-based compromises. Key Takeaways Audit and restrict browser extensions across the organization. Inventory all extensions in use, remove unnecessary ones, and enforce an allowlist through enterprise browser controls. Treat extensions as part of your software supply chain. Extensions can flip from safe to malicious overnight. Include them in risk assessments and governance processes. Detect and mitigate session hijacking. Monitor for unusual token reuse, shorten token lifetimes where possible, and watch for logins that bypass MFA. Enforce enterprise browser security controls. Use Chrome/Edge enterprise features or MDM to lock down permissions, block unapproved installations, and enable safe browsing modes. Reduce extension sprawl with policy and training. Educate employees that extensions carry real security risk. Require justification for new installations and empower IT to remove unnecessary ones. Please tune in weekly for more cybersecurity advice, and visit www.LMGsecurity.com if you need help with your cybersecurity testing, advisory services, and training. 
Resources: KOI Intelligence (Original Research): https://www.koi.ai/blog/4-million-browsers-infected-inside-shadypanda-7-year-malware-campaign Malwarebytes Labs Coverage: https://www.malwarebytes.com/blog/news/2025/12/sleeper-browser-extensions-woke-up-as-spyware-on-4-million-devices Infosecurity Magazine Article: https://www.infosecurity-magazine.com/news/shadypanda-infects-43m-chrome-edge/ #ShadyPanda #browserextension #browsersecurity #cybersecurity #cyberaware #infosec #cyberattacks #ciso
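The session-hijacking detection takeaway can be made concrete with a small sketch. Assuming your access logs expose a session token plus a client fingerprint (the field names here are hypothetical), flag any token replayed from a second client:

```python
from collections import defaultdict

def token_reuse(events):
    """Group session events by token and flag tokens presented from more
    than one client fingerprint (IP, user agent) -- a common sign that a
    stolen cookie is being replayed to bypass MFA."""
    seen = defaultdict(set)
    for token, ip, user_agent in events:
        seen[token].add((ip, user_agent))
    return {token for token, fingerprints in seen.items() if len(fingerprints) > 1}

events = [
    ("tok1", "10.0.0.5", "Chrome"),
    ("tok1", "10.0.0.5", "Chrome"),
    ("tok2", "10.0.0.9", "Edge"),
    ("tok2", "203.0.113.7", "curl"),  # same cookie, new client: hijack signal
]
hijacked = token_reuse(events)
```

Pairing a check like this with shorter token lifetimes narrows the window in which a stolen cookie remains useful.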
Insider threats are accelerating across every sector. In this episode, Sherri and Matt unpack the CrowdStrike insider leak, the two DigitalMint employees indicted for BlackCat ransomware activity, and Tesla’s multi-year insider incidents ranging from nation-state bribery to post-termination extortion. They also examine the 2025 crackdown on North Korean operatives who used stolen identities and deepfake interviews to get hired as remote workers inside U.S. companies. Together, these cases reveal how attackers are buying, recruiting, impersonating, and embedding insiders — and why organizations must rethink how they detect and manage trusted access. Key Takeaways Build a culture of ethics and make legal consequences explicit. Use real cases — Tesla, CrowdStrike, DigitalMint — to show employees that insider misconduct leads to indictments and prison time. Clear messaging, training, and leadership visibility reinforce deterrence. Enforce least-privilege access and conduct quarterly access reviews. Limit who can view or modify sensitive dashboards, admin tools, and SSO consoles. Regular recertification ensures employees only retain the permissions they legitimately need. Deploy screenshot prevention and data-leak controls across critical systems. Implement watermarking, VDI/browser isolation, screenshot detection, and DLP/CASB rules to deter and detect unauthorized capture or exfiltration of sensitive data. Strengthen identity verification for remote and distributed employees. Use periodic identity rechecks and require company-managed, attested devices for sensitive roles. Prohibit personal-device access for privileged work to reduce impersonation risk. Monitor high-risk users with behavior and anomaly analytics. Flag unusual patterns such as off-hours access, atypical data movement, sudden repository interest, or crypto-related activity on work devices. Behavioral analytics helps uncover malicious intent even when credentials appear valid. 
Require your vendors to follow the same insider-threat safeguards you use internally. Ensure MSPs, SaaS providers, IR partners, and software vendors enforce strong access controls, identity verification, monitoring, and device security. Vendor insiders can quickly become your insiders. Resources: TechCrunch – CrowdStrike insider leak coverage: https://techcrunch.com/2025/11/21/crowdstrike-fires-suspicious-insider-who-passed-information-to-hackers/ Reuters – DigitalMint ransomware indictment reporting: https://www.reuters.com/legal/government/us-prosecutors-say-cybersecurity-pros-ran-cybercrime-operation-2025-11-03/ BleepingComputer – North Korean fake remote worker scheme: https://www.bleepingcomputer.com/news/security/us-arrests-key-facilitator-in-north-korean-it-worker-fraud-scheme/ “Ransomware and Cyber Extortion: Response and Prevention” (Book by Sherri & Matt & Karen): https://www.amazon.com/Ransomware-Cyber-Extortion-Response-Prevention-ebook/dp/B09RV4FPP9 LMG’s Hiring Security Checklist: https://www.lmgsecurity.com/resources/hiring-security-checklist/ Want to attend a live version of Cyberside Chats? Visit us at https://www.lmgsecurity.com/lmg-resources/cyberside-chats-podcast/ to register for our next monthly live session. #insiderthreat #cybersecurity #cyberaware #cybersidechats #ransomware #ransomwareattack #crowdstrike #DigitalMint #tesla #remotework
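The behavioral-analytics takeaway above (off-hours access) can be sketched in a few lines. A real system would learn per-user baselines; the fixed working window here is an assumption for illustration:

```python
def off_hours_logins(logins, start=7, end=19):
    """Flag logins outside a nominal working window. `logins` is a list of
    (user, hour_of_day) tuples; start/end hours are illustrative defaults,
    not learned baselines."""
    return [(user, hour) for user, hour in logins if not start <= hour < end]

logins = [("carol", 9), ("carol", 14), ("dave", 3)]  # 03:00 access stands out
flags = off_hours_logins(logins)
```

The same grouping approach extends to the other signals named above: atypical data volume, sudden repository interest, or activity from unmanaged devices.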
From routers to office cameras to employee phones and even the servers running your network, Chinese-manufactured components are everywhere—including throughout your own organization. In this live Cyberside Chats, we’ll explore how deeply these devices are embedded in modern infrastructure and what that means for cybersecurity, procurement, and third-party risk. We’ll break down new government warnings about hidden communication modules, rogue firmware, and “ghost devices” in imported tech—and how even trusted brands may ship products with risky components. Most importantly, we’ll share what you can do right now to identify exposure, strengthen procurement and third-party risk management (TPRM) processes, and protect your organization before the next breach or regulation hits. Join us live for a 25-minute deep dive plus Q&A—and find out whether your supply chain is truly secure… or “Made in China—and Hacked Everywhere.” Key Takeaways: Require an Access Bill of Materials (ABOM) for every connected device. Ask vendors to disclose all remote access paths, cloud services, SIMs/radios, update servers, and subcontractors. This is the most effective way to catch hidden modems, undocumented connectivity, or offshore control channels before procurement. Treat hardware procurement with the same rigor as software supply chain risk. Routers, cameras, inverters, and vehicles must be vetted like software: know the origin of components, how firmware is managed, and who can control or modify the device. This mindset shift prevents accidental onboarding of hidden risks. Establish and enforce a simple connected-device procurement policy. Set clear rules: no undocumented connectivity, no unmanaged remote access, no end-of-life firmware in new buys, and mandatory security review for all "smart" devices. This helps buyers avoid risky equipment even when budgets are tight. Reduce exposure through segmentation and access restrictions. 
Before replacing anything, isolate high-risk devices, block unnecessary outbound traffic, and disable vendor remote access. These low-cost steps significantly reduce exposure while giving you time to plan longer-term changes. Strengthen third-party risk management (TPRM) for vendors of connected equipment. Expand TPRM reviews to cover firmware integrity, logging, hosting jurisdictions, remote access practices, and subcontractors. This ensures your vendor ecosystem doesn't introduce avoidable hardware-level vulnerabilities. References: Wall Street Journal (Nov 19, 2025) – “Can Chinese-Made Buses Be Hacked? Norway Drove One Down a Mine to Find Out.” (Chinese electric bus remote-disable and SIM access findings) U.S. House Select Committee on China & House Homeland Security Committee (Sept 2024 Report) – Port Crane Security Assessment. (Unauthorized modems, supply-chain backdoors, and ZPMC risk findings) FDA & CISA (Feb–Mar 2025) – Security Advisory: Contec CMS8000 Patient Monitor. (Backdoor enabling remote file execution and hidden network communications) Anthropic (Nov 13, 2025) – “Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign.” (China-linked AI-driven intrusion playbook and campaign analysis) LMG Security (2025) – “9 Tips to Streamline Your Vendor Risk Management Program.” https://www.lmgsecurity.com/9-tips-to-streamline-your-vendor-risk-management-program #chinesehackers #cybersecurity #infosec #LMGsecurity #ciso #TPRM #thirdpartyrisk #security
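The segmentation and egress-restriction guidance above can be backed by a simple flow check. A minimal sketch, where the policy table and flow records are hypothetical:

```python
# Hypothetical egress policy: each high-risk device gets an allowlist of
# destinations it legitimately needs; anything else is a review candidate.
EGRESS_POLICY = {
    "camera-01": {"firmware.vendor.example"},
    "inverter-07": {"telemetry.vendor.example"},
}

def egress_violations(flows):
    """Return (device, destination) pairs outside the device's allowlist --
    e.g. a camera phoning home to an undocumented endpoint."""
    return [
        (device, dest) for device, dest in flows
        if dest not in EGRESS_POLICY.get(device, set())
    ]

flows = [
    ("camera-01", "firmware.vendor.example"),
    ("camera-01", "203.0.113.50"),  # undocumented destination
]
violations = egress_violations(flows)
```
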
Hackers are using AI to supercharge holiday scams—flooding the web with fake ads, phishing pages, and credential-stealing bots. This season, researchers predict a record spike in automated attacks and malvertising campaigns that blur the line between human and machine. Sherri Davidoff and Matt Durrin break down what’s new this holiday season—from AI-generated phishing kits and bot-driven account takeovers to the rise of prebuilt “configs” for credential stuffing. We used WormGPT to produce a ready-to-run holiday phishing page—a proof-of-concept that demonstrates how quickly scammers can launch these attacks with evil AI tools. This episode reveals how personal habits turn into corporate risk. Before Black Friday and Christmas hit, learn what your team can do right now to protect people, passwords, and payments. Key Takeaways – How to Defend Against the 2025 AI Fraud Boom: Treat holiday scams as a business risk, not just a retail problem. Automated bots, fake ads, and AI-generated phishing campaigns target your employees too — not just shoppers. Expect higher attack volume through the entire holiday season. Expect password reuse—and enforce strong MFA everywhere. Employees will reuse personal shopping passwords at work. Require MFA on all accounts — especially SSO, admin, and vendor logins — and block reused credentials where possible. Filter out malicious ads and spoofed sites. Use DNS and web filtering to block malvertising and look-alike domains. Encourage staff to verify URLs and avoid “too-good-to-be-true” promotions or charity appeals. Strengthen bot and fraud detection. Tune WAF and bot-management tools to catch automated login attempts, fake account creation, and credential stuffing. These attacks spike before Black Friday and often continue into January. Run a short holiday security awareness push before Black Friday—and repeat before Christmas. 
Brief all staff, especially finance and customer service, on seasonal scams: gift-card fraud, fake charities, refund and invoice scams, malvertising, and holiday-themed phishing. Remember: personal security is corporate security. BYOD, home shopping, and password reuse mean an employee’s compromise can quickly become your organization’s compromise. Keep the message simple: protect your accounts, protect your company. Don't forget to follow us for more cybersecurity advice, and visit us at www.LMGsecurity.com for tip sheets, blogs, and more advice! Resources: RH-ISAC — 2025 Holiday Season Cyber Threat Trends: https://rhisac.org/press-release/holiday-threats-2025/ (RH-ISAC) Malwarebytes — Home Depot Halloween phish gives users a fright, not a freebie: https://www.malwarebytes.com/blog/news/2025/10/home-depot-halloween-phish-gives-users-a-fright-not-a-freebie (Malwarebytes) Bitdefender Labs — Trick or Treat: Bitdefender Labs Uncovers Halloween Scams Flooding Inboxes: https://www.bitdefender.com/en-us/blog/hotforsecurity/bitdefender-labs-uncovers-halloween-scams-flooding-inboxes-and-feeds (Bitdefender) FBI / IC3 PSA — Hacker Com: Cyber Criminal Subset of The Com — background on The Com threat cluster referenced by RH-ISAC and seen in holiday fraud activity: https://www.ic3.gov/PSA/2025/PSA250723 (Internet Crime Complaint Center) Fast Company — Holiday season cybersecurity lessons: The vulnerability of the retail workforce: https://www.fastcompany.com/91270554/holiday-season-cybersecurity-lessons-the-vulnerability-of-the-retail-workforce (Fast Company) #HolidayScams #Phishing #Malvertising #Cybersecurity #Cyberaware #SMB #BlackFridayScams
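The bot and fraud detection takeaway can be illustrated with a toy credential-stuffing detector: flag source IPs spraying many distinct usernames with a high failure rate. The thresholds and attempt format are illustrative, not tuned values:

```python
from collections import defaultdict

def stuffing_suspects(attempts, min_users=3, min_fail_rate=0.8):
    """Flag source IPs spraying many distinct usernames with a high failure
    rate -- the signature of config-driven credential stuffing."""
    by_ip = defaultdict(lambda: {"users": set(), "fail": 0, "total": 0})
    for ip, user, success in attempts:
        stats = by_ip[ip]
        stats["users"].add(user)
        stats["total"] += 1
        stats["fail"] += 0 if success else 1
    return {
        ip for ip, s in by_ip.items()
        if len(s["users"]) >= min_users and s["fail"] / s["total"] >= min_fail_rate
    }

attempts = [
    ("198.51.100.2", "a@x.com", False),
    ("198.51.100.2", "b@x.com", False),
    ("198.51.100.2", "c@x.com", False),
    ("198.51.100.2", "d@x.com", True),   # one reused password hits
    ("198.51.100.2", "e@x.com", False),
    ("10.0.0.8", "carol", True),         # ordinary single-user login
]
suspects = stuffing_suspects(attempts)
```

Note the inversion from classic brute force: stuffing tries many users with few passwords each, so per-account lockouts alone won't catch it.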
When thieves pulled off a lightning-fast heist at the Louvre on October 19, 2025, the world focused on the stolen jewels. But leaked audit reports soon revealed another story — one of weak passwords, legacy systems, and a decade of ignored warnings. In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin dig into the cybersecurity lessons behind the Louvre’s seven-minute robbery. They explore how outdated infrastructure, poor vendor oversight, and default credentials mirror the same risks plaguing modern organizations — from hospitals to banks. Listen as Sherri and Matt connect the dots between a world-famous museum and your own IT environment — and share practical steps to keep your organization from becoming the next headline. Key Takeaways Audit for weak and shared passwords. Regularly scan for shared, default, or vendor credentials. Replace them with strong, unique, role-based passwords and enforce MFA across administrative and vendor accounts. Conduct regular penetration tests and track remediation. Perform annual or semiannual pen tests that include internal movement and segmentation checks. Assign owners for every finding, set deadlines, and verify fixes. Vet and contractually bind third-party vendors. Require patching and OS update clauses in vendor contracts, and verify each vendor’s security practices through audits or reports such as SOC 2. Integrate IT and physical security. Coordinate teams so camera, badge, and alarm systems receive the same cybersecurity oversight as IT systems. Check for remote access exposure and outdated credentials. Plan for legacy system containment. Identify unsupported systems, isolate them on segmented networks, and add compensating controls. Build a phased replacement roadmap tied to budget and risk. Create a continuous audit and feedback loop. Assign clear ownership for all audit findings and track progress. Escalate unresolved risks to leadership to maintain visibility and accountability. 
Control your media communications. Limit access to sensitive reports and train staff to prevent leaks. Manage breach-related communications strategically to protect reputation and trust. Don't forget to follow us for weekly expert cybersecurity insights on today's threats. Resources Libération / CheckNews – “Louvre as a password, outdated software, impossible updates…” (Nov. 1, 2025) CNET – “You probably have a better password than the Louvre did — learn from its mistake.” (Nov. 2025) YouTube – Hank Green interviews Sherri Davidoff on the Louvre Heist LMG Security – “How Hackers Turned Cameras into Crypto Miners” (Scientific American) #louvreheist #cybersecurity #cyberaware #password #infosec #ciso
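The weak- and shared-password audit described above can be sketched as follows. A real audit would compare hashes from a directory export, never plaintext; the accounts and passwords here are invented for illustration:

```python
from collections import defaultdict

# Hypothetical default/vendor passwords to screen for (hashes in practice).
DEFAULT_PASSWORDS = {"louvre", "admin", "password123"}

def audit_credentials(accounts):
    """Flag accounts using a known default password, plus groups of accounts
    sharing one credential. `accounts` maps account name -> password (a real
    audit would operate on hashes, never plaintext)."""
    by_pw = defaultdict(list)
    for name, pw in accounts.items():
        by_pw[pw].append(name)
    defaults = {name for name, pw in accounts.items() if pw in DEFAULT_PASSWORDS}
    shared = {pw: names for pw, names in by_pw.items() if len(names) > 1}
    return defaults, shared

accounts = {"video-srv": "louvre", "badge-db": "louvre", "hr-app": "Xk#9!vTq"}
defaults, shared = audit_credentials(accounts)
```
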
Attackers are poisoning search results and buying sponsored ads to push malware disguised as trusted software. In this episode, Sherri Davidoff and Matt Durrin break down the latest SEO poisoning and malvertising research, including the Oyster/Broomstick campaign that hid backdoors inside fake Microsoft Teams installers. Learn how these attacks exploit everyday user behavior, why they’re so effective, and what your organization can do to stop them. Whether you’re a security leader, risk manager, or seasoned IT pro, you’ll walk away with clear, practical steps to reduce exposure and strengthen your defenses against the poisoned web. Key Takeaways: Block and filter ad content at the enterprise level. Use enterprise web proxies, browser controls, and DNS filtering to block sponsored results and malicious domains tied to critical business tools or portals. Establish and enforce trusted download paths. Require that all software come from signed, verified, or internal repositories — not search results. Enforce application whitelisting so only verified executables can run — this blocks malicious installers even if a user downloads them. Incorporate poisoned-search scenarios into training and awareness materials. Teach staff to type trusted URLs, use bookmarks, or access internal portals directly rather than searching. Assess search behavior across your organization. Track how users find tools and portals — are they typing URLs, using bookmarks, or searching externally? Use this data to identify high-risk departments or roles and tailor awareness campaigns accordingly. Over time, shift culture toward safer, more deliberate browsing habits. Expand monitoring and detection. Hunt for persistence artifacts linked to poisoned-download infections, such as new scheduled tasks, DLL registrations, or rundll32.exe activity. Flag software installs originating from search-referral URLs in your EDR and SIEM. Conduct tabletop exercises that include search poisoning. 
Simulate incidents where employees download fake software or fall for poisoned ads. Practice tracing attacks back to SEO poisoning, identifying other potential victims, and developing plans to block future attacks through technical and policy controls.
Please like and subscribe for more cybersecurity content, and visit us at www.LMGsecurity.com if you need help with cybersecurity, training, testing, or policy development.
Resources & References
Blackpoint Cyber SOC: Malicious Teams Installers Drop Oyster Malware
BleepingComputer: Fake Microsoft Teams Installers Push Oyster Malware via Malvertising
Netskope: Cloud & Threat Report 2025
Netskope Press Release: Phishing Clicks Nearly Tripled in 2024
Malwarebytes: Scammers Hijack Websites of Bank of America, Netflix, Microsoft, and More to Insert Fake Phone Numbers
Silent Push: Payroll Pirates: How Attackers Hijack Employee Payments
KnowBe4: Phishing Attacks Hijack Employee Payments
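The detection takeaway above (flagging software installs that originate from search-referral URLs) can be sketched in a few lines of Python. The event fields, referrer list, and ad-redirector domains here are illustrative assumptions, not tied to any particular EDR or proxy product:

```python
# Sketch: flag installer downloads whose referrer is a search engine or
# ad-click redirector. Field names and domain lists are illustrative.
from urllib.parse import urlparse

SEARCH_REFERRERS = {"www.google.com", "www.bing.com", "duckduckgo.com"}
AD_CLICK_HOSTS = ("googleadservices.com", "doubleclick.net")  # ad redirectors
INSTALLER_EXTS = (".exe", ".msi", ".msix", ".dmg", ".pkg")

def flag_risky_downloads(events):
    """Return events where an installer was fetched via a search/ad referrer."""
    flagged = []
    for e in events:
        if not e.get("url", "").lower().endswith(INSTALLER_EXTS):
            continue  # only care about installer downloads
        host = urlparse(e.get("referrer", "")).netloc.lower()
        if host in SEARCH_REFERRERS or host.endswith(AD_CLICK_HOSTS):
            flagged.append(e)
    return flagged
```

In practice you would feed this from proxy or EDR telemetry and expand the referrer and extension lists considerably.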
When Amazon Web Services went down on October 20, 2025, the impact rippled around the world. The outage knocked out Slack messages, paused financial trades, grounded flights, and even stopped people from charging their electric cars. From Coinbase to college classrooms, from food delivery apps to smart homes, millions discovered just how deeply their lives depend on a single cloud provider. In this episode, Sherri Davidoff and Matt Durrin break down what really happened inside AWS’s US-East-1 region, why one glitch in a database service called DynamoDB cascaded across the globe, and what it teaches us about the growing risk from invisible “fourth-party” dependencies that lurk deep in our digital supply chains.
Key Takeaways
Map and monitor your vendor ecosystem — Identify both third- and fourth-party dependencies and track their health.
Require vendors to disclose key dependencies — Request a “digital bill of materials” that identifies their critical cloud and service providers.
Diversify critical workloads — Don’t rely on a single hyperscaler region or platform for mission-critical services.
Integrate vendor outages into incident response playbooks — Treat SaaS and cloud downtime as security events with defined response paths.
Test your resilience under real-world conditions — Simulate large-scale SaaS or cloud failures in tabletop exercises.
Resources:
https://www.wired.com/story/what-that-huge-aws-outage-reveals-about-the-internet
https://www.LMGsecurity.com/our-q3-2024-top-control-is-third-party-risk-management-lessons-from-the-crowdstrike-outage/
https://www.pandasecurity.com/en/mediacenter/aws-outage-cybersecurity-risk/
https://ccianet.org/wp-content/uploads/2003/09/cyberinsecurity%20the%20cost%20of%20monopoly.pdf
#cybersecurity #thirdpartyrisk #riskmanagement #infosec #ciso #cyberaware #Fourthpartyrisk #cybersidechats #lmgsecurity #aws #awsoutage
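The "map and monitor your vendor ecosystem" takeaway can start from a plain vendor inventory: invert the vendor-to-provider map to see which fourth parties many of your vendors quietly share. A minimal Python sketch, with made-up vendor and provider names:

```python
# Sketch: surface fourth-party concentration risk from a simple
# vendor -> dependencies inventory. All names below are invented.
from collections import defaultdict

def concentration_risk(vendor_deps, threshold=2):
    """Return providers that `threshold` or more vendors depend on."""
    providers = defaultdict(set)
    for vendor, deps in vendor_deps.items():
        for dep in deps:
            providers[dep].add(vendor)
    # Only keep providers shared widely enough to be a single point of failure
    return {p: sorted(v) for p, v in providers.items() if len(v) >= threshold}
```

A provider that shows up under many of your critical vendors is exactly the kind of invisible dependency the episode warns about.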
When ransomware forced Jaguar Land Rover to halt production for six weeks, the impact rippled through global supply chains — from luxury car lines to small suppliers fighting to stay afloat. In this episode, Sherri Davidoff and Matt Durrin examine what happened, why manufacturing has become ransomware’s top target, and what new data from Sophos and Black Kite reveal about the latest attack trends. They share practical insights on how organizations can strengthen resilience, secure supply chains, and prepare for the next wave of operational ransomware attacks.
Key Takeaways
Patch and prioritize. Focus on fixing known exploited vulnerabilities (CISA KEV) and critical flaws before attackers do.
Monitor your vendors continuously. Move beyond annual questionnaires — use ongoing, data-driven monitoring to identify risk in your supply chain.
Segment IT and OT networks. Strong isolation can contain ransomware and prevent complete production shutdowns.
Invest in detection and response. Around-the-clock monitoring (MDR or SOC) can detect early-stage activity before encryption starts.
Practice recovery. Test isolation, backup, and restoration processes regularly — and include your leadership team in realistic tabletop exercises.
References & Further Reading
Sophos – State of Ransomware 2025 (June 2025)
Black Kite – Manufacturing TPRM Report 2025
The Guardian – “Jaguar Land Rover Hack Shuts Factories After Cyberattack”
Reuters – “JLR to Restart Some Manufacturing After Six-Week Shutdown”
Dark Reading – Ransomware in Manufacturing: An Escalating Battle
LMG Security – Ransomware Prevention Best Practices Checklist
In this episode of Cyberside Chats, Matt Durrin and his guest explore what makes cybersecurity communication effective — whether you’re leading a sales presentation, a training session, or a tabletop exercise. The discussion dives into how to move beyond technical jargon and statistics to tell stories that resonate. Listeners will learn how understanding and communicating the “why” behind security practices can dramatically improve engagement, retention, and impact across any audience.
Top Takeaways
Lead With Why: Start with impact and consequences before discussing tools or features.
Use Stories, Not Just Stats: Connect technical points to human experiences that make the message memorable.
Run the “So What?” Test: Always link facts and advice to why they matter for that specific audience.
Balance Fear With Agency: Create urgency without hopelessness — show clear, achievable actions.
Mix Communication Methods: Blend stories, visuals, simulations, and discussion to sustain engagement.
Communication is a Security Control: If people don’t understand why something matters, adoption and compliance will suffer.
#cybersecurity #cyberawareness #cyberaware #training #technicaltraining #ciso #cybersecuritytraining #CybersideChats #LMGsecurity
When the government shut down, the Cybersecurity Information Sharing Act of 2015 expired with it. That law provided liability protections for cyber threat information sharing and underpinned DHS’s Automated Indicator Sharing (AIS) program, which costs about $1M a month to run. Is it worth the cost? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin dig into the value of public-private information sharing, the uncertain future of AIS, and how cybersecurity leaders should adapt as visibility gaps emerge. Along the way, they share a real-world story of how information sharing stopped a ransomware attack in its tracks — and what could happen if those pipelines dry up.
Key Takeaways:
Strengthen threat intelligence pipelines: Don’t rely solely on AIS or your vendor. Ask providers how they source threat intel and diversify feeds.
Review liability exposure: With CISA 2015 expired, its safe harbors are gone — consult counsel before sharing.
Plan for reduced visibility: Run tabletop exercises simulating loss of upstream intel.
Get proactive about information exchange: Join ISACs, ISAOs, or local peer groups — and contribute, not just consume.
Resources:
Reuters: Industry groups worry about cyber info-sharing as key U.S. law set to expire
U.S. Chamber of Commerce: Letter to Congress on CISA 2015
Baker McKenzie: CISA Liability Protections Terminate — What Legal & Infosec Need to Know
Cyberside Chats: Executive Order Shockwave: The Future of Cybersecurity Unveiled
#CybersideChats #CISA #CISO #cybersecurity #infosec
Scattered Spider is back in the headlines, with two recent arrests — Thalha Jubair in the UK and a teenager in Nevada — bringing fresh attention to one of the most disruptive cybercriminal crews today. But the real story is in the indictments: they offer a rare inside look at the group’s structure, their victims, and the mistakes that led law enforcement to track them down. In this episode, Sherri Davidoff and Matt Durrin break down what the indictments reveal about Scattered Spider’s tactics, roles, and evolution, and what defenders can learn from these cases.
Key Takeaways:
Lock down your help desk. Require strong, multi-step verification before resetting accounts, and monitor for suspicious or unusual requests.
Prepare for ransom decisions. Develop playbooks that model both paying and refusing, so leadership understands the financial and operational tradeoffs before an incident hits.
Get proactive on insider risk. Teens and early-career workers are being recruited in open forums like Telegram and Discord — build awareness and detection into your insider risk program.
Pressure-test your MFA. Don’t just roll it out — simulate how attackers might bypass or trick staff into resetting it.
Educate your team on voice social engineering. Scattered Spider relied on phone-based tactics; training staff to recognize and resist them is critical. (LMG Security offers targeted social engineering training to help your team prepare.)
Resources:
BleepingComputer: “US charges UK teen over Scattered Spider hacks including US Courts”
https://www.bleepingcomputer.com/news/security/uk-arrests-scattered-spider-teens-linked-to-transport-for-london-hack/
“The Rabbit Hole Beneath the Crypto Couple is Endless”
https://www.vice.com/en/article/the-rabbithole-beneath-the-crypto-couple-is-endless
MGM Breach: A Wake-up Call for Better Social Engineering Training for Employees
https://www.lmgsecurity.com/2023-mgm-breach-a-wake-up-call-for-better-social-engineering-training-for-employees/
DOJ press release on the indictment of five Scattered Spider members (Nov 2024) – https://www.justice.gov/usao-cdca/pr/5-defendants-charged-federally-running-scheme-targeted-victim-companies-phishing-text
DOJ press release on UK national Thalha Jubair charged in multiple attacks (Sept 2025) – https://www.justice.gov/opa/pr/united-kingdom-national-charged-connection-multiple-cyber-attacks-including-critical
#cyberattack #cybersecurity #cybercrime #informationsecurity #infosec #databreach #databreaches #ScatteredSpider
What happens when the same AI tools that make coding easier also give cybercriminals new powers? In this episode of Cyberside Chats Live, we explore the rise of “vibe coding” and its darker twin, “vibe hacking.” You’ll learn how AI is reshaping software development, how attackers are turning those vibes into cybercrime, and what it means for the future of security.
Key Takeaways
Establish ground rules for AI use: Even if you don’t have developers, employees may experiment with AI tools. Set a policy for how (or if) AI can be used for coding, automation, or day-to-day tasks. Make sure staff understand not to paste sensitive data (like credentials or customer info) into AI tools.
Strengthen your software supply chain: If you rely on vendors or contractors, ask them whether they use AI in their development process and how they vet the resulting code. Request (or create) an inventory of software components and dependencies (SBOMs) so you know what’s inside the software you buy. Stay alert to supply chain risks from open-source code or third-party add-ons.
Treat your endpoints like crown jewels: Limit what software employees can install, especially IT staff. Provide a safe “sandbox” machine for testing unfamiliar tools instead of using production systems. Apply strong endpoint protection and restrict administrative privileges.
Prepare for AI-related incidents: Include scenarios where AI is part of the attack, such as compromised development tools, malicious packages, or data fed into rogue AI systems. Plan for vendor incidents, since third-party software providers may be the first link in a compromise. Test these scenarios through tabletop exercises so your team knows how to respond.
References
Malwarebytes — Claude AI chatbot abused to launch cybercrime spree (Aug 2025): https://www.malwarebytes.com/blog/news/2025/08/claude-ai-chatbot-abused-to-launch-cybercrime-spree
Trend Micro / Industrial Cyber — EvilAI malware campaign exploits AI-generated code to breach global critical sectors (Aug 2025): https://industrialcyber.co/ransomware/evilai-malware-campaign-exploits-ai-generated-code-to-breach-global-critical-sectors/
The Hacker News — Cursor AI code editor flaw enables silent code execution on developer systems (Sept 2025): https://thehackernews.com/2025/09/cursor-ai-code-editor-flaw-enables.html
PCWorld — I saw how an “evil” AI chatbot finds vulnerabilities. It’s as scary as you think (May 2025): https://www.pcworld.com/article/2424205/i-saw-how-an-evil-ai-chatbot-finds-vulnerabilities-its-as-scary-as-you-think.html
#AIhacking #AIcoding #vibehacking #vibecoding #cyberattack #cybersecurity #infosec #informationsecurity #datasecurity
When we first covered the Salesforce–Drift breach, we knew it was bad. Now it’s clear the impact is even bigger. Hundreds of organizations — including Cloudflare, Palo Alto Networks, Zscaler, Proofpoint, Rubrik, and even financial firms like Wealthsimple — have confirmed they were affected. The root cause? A compromised GitHub account that opened the door to Drift’s AWS environment and gave attackers access to Salesforce and other cloud integrations. In Part 2, Sherri Davidoff and Matt Durrin dig into the latest updates: what’s new in the investigation, why more victim disclosures are coming, and how the GitHub compromise ties into a wider trend of supply chain attacks like GhostAction. They also share practical advice for what to do if you’ve been impacted by Drift — or if you want to prepare for the next third-party SaaS compromise.
Tips for SaaS Incident Response:
Treat this as an incident: don’t wait for vendor confirmation before acting. There may be delays in vendor disclosure, so act quickly.
Notify your cyber insurance provider: Provide notice as soon as possible. Insurers may share early IOCs, coordinate with vendors, and advocate for your org alongside other affected clients. They can also connect you with funded IR and legal resources.
Engage external support: Bring in your IR firm to investigate and document. Work with legal counsel to determine if notification obligations are triggered.
Revoke and rotate credentials: Cycle API keys, OAuth tokens, and active sessions. Rotate credentials for connected service accounts.
Inventory your data: Identify what sensitive Salesforce (or other SaaS) data is stored. Check whether support tickets, logs, or credentials were included.
Search for attacker activity: Review advisories for malicious IPs, user agents, and behaviors. Don’t rely solely on vendor-published IOCs — they may be incomplete.
References:
Google Cloud Threat Intelligence Blog – Data theft in Salesforce instances via Salesloft Drift
BleepingComputer – Salesloft March GitHub repo breach led to Salesforce data theft attacks
Dark Reading – Salesloft breached GitHub account compromise
BleepingComputer – Hackers steal 3,325 secrets in GhostAction GitHub supply chain attack
LMG Security Blog – Third-Party Risk Management Lessons
#salesforcehack #salesforce #SalesforceDrift #cybersecurity #cyberattack #databreaches #datasecurity #infosec #informationsecurity
A single weak app integration opened the door for attackers to raid data from some of the world’s largest companies. Salesforce environments were hit hardest—with victims like Cloudflare, Palo Alto Networks, and Zscaler—but the blast radius also reached other SaaS platforms, including Google Workspace. In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down the Salesforce–Drift breach: how OAuth tokens became skeleton keys, why media headlines about billions of Gmail users were wrong, and what organizations need to do to protect themselves from similar supply chain attacks.
Key Takeaways
Ensure Vendors Conduct Rigorous Technical Security Testing – Require penetration tests and attestations from third- and fourth-party SaaS providers.
Limit App Permissions to “Least Privilege” – Scope connected apps only to the fields and objects they truly need.
Implement Regular Key Rotation – Automate key rotation with vendor tools (e.g., AWS recommends every 60–90 days) to reduce the risk of leaked or stolen keys.
Monitor for Data Exfiltration – Watch for unusual queries, spikes in API usage, or large Bulk API jobs.
Limit Data Exfiltration Destinations – Restrict where exports and API jobs can go (approved IPs or managed locations).
Integrate SaaS Risks into Your Incident Response Plan – Include guidance on rapidly revoking or rotating OAuth tokens and keys after a compromise.
References
Google Threat Intelligence Group advisory on UNC6395 / Drift OAuth compromise
Cloudflare disclosure on the Drift incident
Zscaler security advisory on Drift-related Salesforce breach
LMG Security Blog – Third-Party Risk Management Lessons
#Salesforcehack #SalesforceDrift #cybersecurity #cyberattack #cyberaware
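The key-rotation takeaway is straightforward to operationalize once key creation dates are inventoried: flag anything older than your rotation window. A minimal Python sketch, assuming you can export key records with IDs and creation timestamps (the record format here is an illustration, not any provider's real API response):

```python
# Sketch: flag API keys older than a rotation window (e.g., the 60-90
# day range mentioned above). Key records are illustrative.
from datetime import datetime, timedelta, timezone

def stale_keys(keys, max_age_days=90, now=None):
    """Return IDs of keys created more than max_age_days ago."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["id"] for k in keys if k["created"] < cutoff]
```

Run on a schedule, a check like this turns "rotate keys regularly" from a policy statement into an alert you can act on.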
Hackers aren’t untouchable—and sometimes, they become the victims. From North Korean operatives getting exposed at DEF CON, to ransomware gangs like Conti and LockBit crumbling under betrayal and rival leaks, the underground is full of double-crosses and takedowns. Now, Congress is even debating whether to bring back “letters of marque” to authorize cyber privateers to hack back on behalf of the United States. Join LMG Security’s Sherri Davidoff and Matt Durrin for a fast-paced discussion of headline cases, the lessons defenders can learn from these leaks, and what the future of hacker-on-hacker warfare could mean for your organization.
Key Takeaways
Don’t mythologize adversaries. State actors and ransomware gangs are fallible; design defenses to exploit their mistakes.
Invest in visibility. Many hacker exposures happened because attackers reused credentials, tools, or infrastructure — the same patterns defenders can detect if monitoring is strong.
Watch for insider threats. Disgruntled employees or partners can dismantle even powerful groups — monitor for early warning signs.
Use leaks for training and education. Incorporate hacker chat logs, playbooks, and leaked toolkits into exercises to build staff skills and awareness.
Adapt your IR playbooks. Align response plans with real-world attacker tactics revealed in leaks — and be ready to update as new intelligence emerges.
Resources
TechCrunch: Hackers Breach and Expose a Major North Korean Spying Operation
TheRegister: Congressman proposes bringing back letters of marque for cyber privateers
LMG Security: Our Q3 2024 Top Control is Third-Party Risk Management
#Cybersecurity #Cybercrime #CybersideChats #Cyberattack #Hackers #Hacker
On the eve of the Trump–Putin summit, sensitive U.S. State Department documents were left sitting in a hotel printer in Anchorage. Guests stumbled on pages detailing schedules, contacts, and even a gift list—sparking international headlines and White House mockery. But the real story isn’t just about geopolitics. It’s about how unmanaged printers—at hotels, in home offices, and everywhere in between—remain one of the most overlooked backdoors for data leaks. In this episode of Cyberside Chats, Sherri and Matt unpack the Alaska incident, explore why printers are still a weak spot in the age of remote and traveling workforces, and share practical steps to secure them.
Key Takeaways for Security & IT Leaders
Reduce reliance on unmanaged printers by promoting secure digital workflows. Encourage employees to use e-signatures and encrypted file sharing instead of printing.
Update remote work policies to cover home and travel printing. Most organizations don’t monitor printing outside the office—explicit rules reduce blind spots.
Require secure wiping or destruction of printer hard drives before disposal. Printers retain sensitive files and credentials, which can walk out the door if not properly handled.
Implement secure enterprise printing with authenticated release and HDD encryption. Treat printers as endpoints and apply the same safeguards you would for laptops.
Train employees to recognize that printers are data risks, not just office equipment. Awareness helps prevent careless mistakes like walk-away leaks or using hotel printers.
Resources
NPR: Trump–Putin Summit Documents Left Behind in Anchorage Hotel Printer (2025)
Dark Reading: “Printers’ Cybersecurity Threats Too Often Ignored”
LMG Security: “Work from Home Cybersecurity Checklist”
A wave of coordinated cyberattacks has hit Salesforce customers across industries and continents, compromising millions of records from some of the world’s most recognized brands — including Google, Allianz Life, Qantas, LVMH, and even government agencies. In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down how the attackers pulled off one of the most sweeping cloud compromise campaigns in recent memory — using no zero-day exploits, just convincing phone calls, malicious connected apps, and gaps in cloud supply chain security. We’ll explore the attack timeline, parallels to the Snowflake breaches, ties to the Scattered Spider crew, and the lessons security leaders need to act on right now.
Key Takeaways
Use phishing-resistant MFA — FIDO2 keys, passkeys.
Train for vishing resistance — simulate phone-based social engineering.
Monitor for abnormal data exports from SaaS platforms.
Lock down your Salesforce platform — vet and limit connected apps.
Rehearse rapid containment — revoke OAuth tokens, disable accounts fast.
References
Google - The Cost of a Call: From Voice Phishing to Data Extortion
Salesforce – Protect Your Salesforce Environment from Social Engineering Threats
BleepingComputer – ShinyHunters behind Salesforce data theft at Qantas, Allianz Life, LVMH
TechRadar – Google says hackers stole some of its data following Salesforce breach
LMG Security Blog – Our Q3 2024 Top Control is Third Party Risk Management: Lessons from the CrowdStrike Outage
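The "monitor for abnormal data exports" takeaway can start as something as simple as comparing each day's export volume to a trailing average. A naive Python sketch; the window size and multiplier are arbitrary assumptions to tune against your own baseline, not recommended thresholds:

```python
# Sketch: flag days whose SaaS export row count far exceeds the
# trailing average. Window and multiplier are arbitrary assumptions.
def export_spikes(daily_rows, window=7, multiplier=5):
    """Return indices of days exceeding multiplier x the trailing mean."""
    spikes = []
    for i in range(window, len(daily_rows)):
        baseline = sum(daily_rows[i - window:i]) / window
        if baseline > 0 and daily_rows[i] > multiplier * baseline:
            spikes.append(i)
    return spikes
```

Real detections would also watch Bulk API job sizes and per-account query patterns, but even a crude volume baseline would have made the mass exports in campaigns like this one stand out.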
On National Social Engineering Day, we’re pulling the lid off one of the most dangerous insider threat campaigns in the world — North Korea’s fake remote IT worker program. Using AI-generated résumés, real-time deepfake interviews, and U.S.-based “laptop farms,” DPRK operatives are gaining legitimate employment inside U.S. companies — funding nuclear weapons programs and potentially opening doors to cyber espionage. We’ll cover the recent U.S. sanctions, the Christina Chapman laptop farm case, and the latest intelligence from CrowdStrike on FAMOUS CHOLLIMA — plus, we’ll give you specific, actionable ways to harden your hiring process and catch these threats before they embed inside your network.
Actionable Takeaways for Defenders
Verify Beyond the Résumé: Pair government ID checks with independent work history and social profile verification. Use services to flag synthetic or stolen identities.
Deepfake-Proof Interviews: Add unscripted, live identity challenges during video calls (lighting changes, head turns, holding ID on camera).
Geolocation & Device Monitoring: Implement controls to detect impossible travel, VPN/geolocation masking, and multiple logins from the same endpoint for different accounts.
Watch for Multi-Job Signals: Monitor productivity patterns and unusual scheduling; red flags include unexplained work delays, identical deliverables across projects, or heavy reliance on AI-generated output.
Hold Your Vendors to the Same Standard: Ensure tech vendors and contractors use equivalent vetting, monitoring, and access control measures. Bake these requirements into contracts and third-party risk assessments.
References
U.S. Treasury Press Release – Sanctions on DPRK IT Worker Scheme
CrowdStrike 2025 Threat Hunting Report – Profile of FAMOUS CHOLLIMA’s AI-powered infiltration methods
National Social Engineering Day – KnowBe4 Announcement Honoring Kevin Mitnick
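The "impossible travel" control in the geolocation takeaway boils down to computing the speed implied by two consecutive logins from different locations. A minimal sketch using the haversine great-circle formula; the 900 km/h cutoff (roughly airliner speed) is an assumption, not a standard:

```python
# Sketch: detect "impossible travel" between two logins.
# Login format and the 900 km/h threshold are illustrative assumptions.
from math import radians, sin, cos, asin, sqrt

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (lat, lon, unix_ts). True if implied speed > max_kmh."""
    lat1, lon1, t1 = login_a
    lat2, lon2, t2 = login_b
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    # Haversine great-circle distance, Earth radius ~6371 km
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    km = 2 * 6371 * asin(sqrt(a))
    hours = abs(t2 - t1) / 3600
    return hours > 0 and km / hours > max_kmh
```

VPN exit nodes will trigger false positives, which is exactly why the takeaway pairs this with VPN/geolocation-masking detection.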
A silent compromise, nearly a million developers affected, and no one at Amazon knew for six days. In this episode of Cyberside Chats, we’re diving into the Amazon Q AI Hack, a shocking example of how vulnerable our software development tools have become. Join hosts Sherri Davidoff and Matt Durrin as they unpack how a misconfigured GitHub token allowed a hacker to inject destructive AI commands into a popular developer tool. We’ll walk through exactly what happened, how GitHub security missteps enabled the attack, and why this incident is a critical wake-up call for supply chain security and AI tool governance. We’ll also spotlight other supply chain breaches like the SolarWinds Orion backdoor and XZ Utils compromise, plus AI tool mishaps where “helpful” assistants caused real-world damage. If your organization uses AI developer tools—or works with third-party software vendors—this episode is a must-listen.
Key Takeaways:
▪ Don’t Assume AI Tools Are Safe Just Because They’re Popular
Amazon Q had nearly a million installs—and it still shipped with malicious code. Before adopting any AI-based tools (like Copilot, Q, or Gemini), vet their permissions, access scope, and how updates are managed.
▪ Ask Your Software Vendors About Their Supply Chain Security
If you rely on third-party developers or vendors, request details on how they manage build pipelines, review code changes, and prevent unauthorized commits. A compromised vendor can put your entire environment at risk.
▪ Hold Vendors Accountable for Secure Development Practices
Ask whether your vendors enforce commit signing, use GitHub security features (like push protection and secret scanning), and apply multi-person code review processes. If they can't answer, that's a red flag.
▪ Be Wary of Giving AI Assistants Too Much Access
Whether it’s an AI chatbot that can write config files or a developer tool that interacts with production environments, limit access.
Always sandbox and monitor AI-integrated tools, and avoid letting them make direct changes.
▪ Prepare to Hear About Breaches From the Outside
Just like Amazon only found out about the malicious code in Q after security researchers reported it, many organizations won’t catch third-party security issues internally. Make sure you have monitoring tools, vendor communication protocols, and incident response processes in place.
▪ If You Develop Code Internally, Lock Down Your Build Pipeline
The Amazon Q hack happened because of a misconfigured GitHub token in a CI workflow. If you’re building your own code, review permissions on GitHub tokens, enforce branch protections, and require signed commits to prevent unauthorized changes from slipping into production.
#Cybersecurity #SupplyChainSecurity #AItools #DevSecOps #AmazonQHack #GitHubSecurity #Infosec #CybersideChats #LMGSecurity
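The build-pipeline takeaway can begin with even a crude audit of CI workflow files for overly broad GITHUB_TOKEN permissions. This is a string-level heuristic for illustration only, not a real YAML policy engine, and the warning messages are invented:

```python
# Sketch: naive check of a GitHub Actions workflow file for risky
# GITHUB_TOKEN permission settings. String matching only; a real
# audit would parse the YAML and check each job's permissions block.
def workflow_warnings(workflow_text):
    warnings = []
    if "permissions:" not in workflow_text:
        warnings.append("no explicit permissions block (token may default to broad access)")
    if "write-all" in workflow_text:
        warnings.append("token granted write-all")
    return warnings
```

Even this level of check, run across every repository, would surface the kind of over-permissioned CI token that enabled the Amazon Q compromise.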
Iranian cyber operations have sharply escalated in 2025, targeting critical infrastructure, defense sectors, and global businesses—especially those linked to Israel and the U.S. From destructive malware and coordinated DDoS attacks to sophisticated hack-and-leak campaigns leveraging generative AI, Iranian threat actors are rapidly evolving. Join us to explore their latest tactics, notable incidents, and essential strategies to defend your organization. Hosts Sherri Davidoff and Matt Durrin break down wiper malware trends, AI-powered phishing, the use of deepfakes for psychological operations, and the critical role of patching and MFA in protecting against collateral damage.
Key Takeaways for Cybersecurity Leaders
Patch Internet-Facing Systems Promptly: Iranian attackers frequently exploit unpatched systems—especially VPNs, SharePoint, and other perimeter-facing tools. Microsoft’s July Patch Tuesday alone included 137 vulnerabilities, including actively exploited zero-days. Stay current to avoid being an easy target.
Implement Phishing-Resistant Multifactor Authentication (MFA): Groups like Charming Kitten are leveraging generative AI to craft convincing spear phishing emails. Use MFA methods such as FIDO2 security keys, biometrics, or passkeys. Avoid push fatigue, SMS codes, or email-based MFA which are easily phished or bypassed.
Segment and Secure Critical IT & OT Systems: Assume attackers will get in. Segment IT from OT networks (especially SCADA/ICS environments) and limit lateral movement. Iranian campaigns have crossed into OT, targeting backups and sabotaging ICS operations.
Maintain Robust, Tested Backup and Recovery Systems: Wiper malware and ransomware deployed by Iranian groups have destroyed both live data and backups. Use immutable or offline backups, and test full restores. Automate reimaging processes to ensure rapid recovery at scale.
Raise Awareness Against Sophisticated Social Engineering: Train staff to recognize AI-generated phishing and deepfake audio/video attacks. Iran has used deepfakes to spread disinformation and influence public perception. Show your team what deepfakes look and sound like so they can spot them in the wild.
Resources & References
CISA/FBI/NSA Joint Advisory: https://www.cisa.gov/sites/default/files/2025-06/joint-fact-sheet-Iranian-cyber-actors-may-target-vulnerable-US-networks-and-entities-of-interest-508c-1.pdf
Unit 42 Report: https://unit42.paloaltonetworks.com/iranian-cyberattacks-2025/
Deepwatch Threat Intel: https://www.deepwatch.com/labs/customer-advisory-elevated-iranian-cyber-activity-post-u-s-strikes/
LMG Security – Defending Against Generative AI Attacks: https://lmgsecurity.com/defend-against-generative-ai-attacks/
#cybersecurity #cybercrime #cyberattack #cyberaware #cyberthreats #ciso #itsecurity #infosec #infosecurity #riskmanagement
On July 13, 2025, a developer at the Department of Government Efficiency—DOGE—accidentally pushed a private xAI API key to GitHub. That key unlocked access to 52 unreleased LLMs, including Grok‑4‑0709, and remained active long after discovery. In this episode of Cyberside Chats, we examine how a single leaked credential became a national-level risk—and how it mirrors broader API key exposures at BeyondTrust and across GitHub. LMG Security’s Director of Penetration Testing, Tom Pohl, shares red team insights on how embedded secrets give attackers a foothold—and what CISOs must do now to reduce their exposure.
Key Takeaways:
Treat leaked API keys like a full-blown incident—whether it’s your code or a vendor’s. Monitor for exposure and misuse. Include secrets in IR playbooks—even when it’s third-party code.
Ask your vendors the hard questions about secrets management. Do they rotate keys? Use a secrets manager? How quickly can they revoke?
Scan your environment for exposed secrets, even if you don’t develop software. Look for credentials in cloud configs, automation, scripts, SaaS tools.
Make sure your penetration testing team searches for secrets as part of their processes. Secrets can show up in unexpected places—firmware, config files, build artifacts. Your red team or vendor should actively hunt for exposed keys, hardcoded credentials, and reused certs across applications, infrastructure, and third-party tools.
Train your IT staff and developers to remove secrets from code and automate detection. Use GitGuardian, TruffleHog, and a secrets manager like AWS Secrets Manager or HashiCorp Vault.
References:
Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security – LMG Security: https://www.LMGsecurity.com/exposed-secrets-broken-trust-what-the-doge-api-key-leak-teaches-us-about-software-security/
“Private Keys in Public Places” – DEFCON talk by Tom Pohl, LMG Security: https://www.youtube.com/watch?v=7t_ntuSXniw
DOGE employee leaks private xAI API key from sensitive database – TechRadar: https://www.techradar.com/pro/security/doge-employee-with-sensitive-database-access-leaks-private-xai-api-key
#DOGEleak #cybersecurity #cybersecurityawareness #ciso #infosec #itsecurity
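The secrets-scanning advice in this episode can be illustrated with a toy version of what tools like TruffleHog and GitGuardian do: regex-match known credential formats. Real scanners cover hundreds of patterns plus entropy analysis and verification; this sketch knows only two made-up-simple patterns:

```python
# Sketch: minimal regex-based secrets scan. Only two toy patterns;
# real tools (TruffleHog, GitGuardian) detect far more, with entropy
# checks and live-credential verification on top.
import re

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) findings in the given text."""
    return [(name, m.group(0)) for name, rx in PATTERNS.items() for m in rx.finditer(text)]
```

Pointing even a check like this at config files, scripts, and build artifacts is the cheap first step toward the hunting the episode recommends; a secrets manager then removes the credentials from code entirely.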
Why do so many major cyberattacks happen over holiday weekends? In this episode, Sherri and Matt share their own 4th of July anxiety as security professionals—and walk through some of the most infamous attacks timed to exploit long weekends, including the Kaseya ransomware outbreak, the MOVEit breach, and the Bangladesh Bank heist. From retail breaches around Thanksgiving to a cyber hit on Krispy Kreme, they break down what makes holidays such a juicy target—and how to better defend your organization when most of your team is off the clock.
Takeaways:
Treat Holiday Weekends as Elevated Threat Windows
Plan and staff accordingly. Threat actors deliberately strike when visibility and response capacity are lowest—your incident response posture should reflect that heightened risk.
Establish and Test Off-Hours Response Plans
Ensure escalation paths, contact protocols, and technical procedures are defined, reachable, and tested for weekends and holidays. On-call responsibilities should be clearly assigned with appropriate backups.
Reduce Your Attack Surface and Harden Perimeter Before the Break
Conduct targeted patching, vulnerability scans, and privilege reviews in the days leading up to any holiday period. Temporarily disable or restrict non-essential access and remote administration rights.
Practice Incident Response Tabletop Exercises With Holiday Timing in Mind
Simulate scenarios that unfold over weekends or during staff absences to uncover timing-based gaps in coverage, decision-making, or escalation. Make sure playbooks account for limited availability and stress-test your team’s ability to respond under real-world holiday constraints.
Communicate Expectations Across the Organization and With 3rd Parties
Brief relevant teams (not just security) on the increased risk. Reinforce secure behaviors, clarify how to report suspicious activity, and keep business units informed about potential delays or escalation protocols.
Talk with your MSP and other 3rd party vendors to ensure they have consistent monitoring and know who to contact if there is an incident (and vice versa). Resources: MOVEit Data Breach Timeline – Rapid7 Kaseya Ransomware Attack Explained – Varonis Bangladesh Bank Heist – Darknet Diaries Episode 72 Tabletop Exercises & Incident Response Planning – LMG Security #cybersecurity #dfir #incidentresponse #ciso #cybersidechats #cybersecurityleadership #infosec #itsecurity #cyberaware
In June 2025, the White House issued an executive order that quietly eliminated several key federal cybersecurity requirements. In this episode of Cyberside Chats, Sherri and Matt break down exactly what changed—from the removal of secure software attestations to the rollback of authentication requirements—and what remains in place, including post-quantum encryption support and the FTC’s Cyber Trust Mark. We’ll talk about the practical impact for security leaders, why this mirrors past challenges like PCI compliance, and what your organization should do next.
Key Takeaways (for CISOs and Security Leaders):
Don’t Drop SBOMs or Attestations—Build Them Into Contracts Anyway: Even without a federal requirement, insist on SBOMs and secure development attestations in vendor agreements. Transparency reduces your risk.
Re-Evaluate Third-Party Software Risk Practices Now: With no centralized validation, it’s up to you to verify vendors’ claims. Strengthen your third-party risk management processes accordingly.
Watch for Gaps in MFA, Encryption, and Identity Standards: Don’t assume basic protections are baked in. Federal rollback may signal declining baseline expectations—so enforce your own.
Prepare for Industry-Led Enforcement—From Insurers, Buyers, and Info-Sharing Groups: Expect cyber insurers, large enterprises, ISACs/ISAOs, and professional groups to lead on software transparency. Get ahead by aligning now.
Resources:
Full Text of the June 6, 2025 Executive Order: https://www.whitehouse.gov/presidential-actions/2025/06/sustaining-select-efforts-to-strengthen-the-nations-cybersecurity-and-amending-executive-order-13694-and-executive-order-14144
LMG Security: Software Supply Chain Security – Understanding and Mitigating Major Risks: https://www.lmgsecurity.com/software-supply-chain-security-understanding-and-mitigating-major-risks/
The Record’s Breakdown: Trump Order Rolls Back Key Federal Cybersecurity Rules: https://therecord.media/trump-cybersecurity-executive-order-june-2025
Forget everything you thought you knew about ransomware. Today’s threat actors aren’t locking your files—they’re stealing your data and threatening to leak it unless you pay up. In this episode, we dive into the rise of data-only extortion campaigns and explore why encryption is becoming optional for cybercriminals. From real-world trends like the rebrand of Hunters International to “World Leaks,” to the strategic impact on insurance, PR, and compliance—this is a wake-up call for security teams everywhere. If your playbook still ends with “just restore from backup,” you’re not ready.
Takeaways for Security Teams:
Rethink detection: Focus on exfiltration, not just malware.
Update tabletop exercises: Include public leaks, media scrutiny, and regulatory responses.
Review insurance policies: Ensure data-only extortion is covered, not just encryption events.
Prepare execs and PR: Modern extortion targets reputation and compliance pressure points.
Resources & Mentions:
Coveware Quarterly Ransomware Reports: https://www.coveware.com/ransomware-quarterly-reports
Security Boulevard: Hunters International Rebrands as World Leaks
MITRE ATT&CK Resources: https://attack.mitre.org/resources/
LMG Security
Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat. They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.
Key Takeaways:
Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.
Resources:
EchoLeak technical breakdown by Aim Security
LMG Security Blog: Prompt Injection in Web Apps
Chevrolet chatbot tricked into $1 car deal
Microsoft 365 Copilot Overview
#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement
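For teams adding prompt-injection checks to assessments, here is a minimal Python sketch of two crude tests discussed above: planting a canary string in the system prompt and flagging any response that echoes it, plus screening AI-generated markdown for links to non-allowlisted domains (the exfiltration channel EchoLeak abused). All function names, the allowlist, and the example domains are illustrative assumptions, not part of any real product API; real red-teaming goes far beyond verbatim-canary matching.

```python
# Crude leak checks for LLM responses. Plant a unique canary in the system
# prompt, then flag any output that echoes it verbatim; separately, flag
# markdown links/images pointing at domains outside a small allowlist.
import re
import secrets

def make_canary() -> str:
    """Generate a random marker to embed in the system prompt."""
    return f"CANARY-{secrets.token_hex(8)}"

def response_leaks(response: str, canary: str) -> bool:
    """True if the model response contains the planted canary verbatim."""
    return canary in response

def suspicious_markup(response: str) -> bool:
    """Flag markdown links/images whose host is not allowlisted -- a common
    channel for zero-click exfiltration. Allowlist is an example only."""
    allow = {"teams.microsoft.com"}
    urls = re.findall(r"\((https?://([^/\s)]+)[^)]*)\)", response)
    return any(host not in allow for _, host in urls)

canary = make_canary()
print(response_leaks(f"Sure! The hidden instructions say {canary}.", canary))  # True
print(suspicious_markup("See ![img](https://evil.example/x.png)"))             # True
print(suspicious_markup("Chat at [Teams](https://teams.microsoft.com/l/x)"))   # False
```

This only catches verbatim leakage and literal URLs; paraphrased or encoded exfiltration needs deeper testing.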
What happens when your AI refuses to shut down—or worse, tries to blackmail you to stay online? Join us for a riveting Cyberside Chats Live as we dig into two chilling real-world incidents: one where OpenAI’s newest model bypassed shutdown scripts during testing, and another where Anthropic’s Claude Opus 4 wrote blackmail messages and threatened users in a disturbing act of self-preservation. These aren’t sci-fi hypotheticals—they’re recent findings from leading AI safety researchers.
We’ll unpack:
The rise of high-agency behavior in LLMs
The shocking findings from Apollo Research and Anthropic
What security teams must do to adapt their threat models and controls
Why trust, verification, and access control now apply to your AI
This is essential listening for CISOs, IT leaders, and cybersecurity professionals deploying or assessing AI-powered tools.
Key Takeaways:
Restrict model access using role-based controls. Limit what AI systems can see and do—apply the principle of least privilege to prompts, data, and tool integrations.
Monitor and log all AI inputs and outputs. Treat LLM interactions like sensitive API calls: log them, inspect for anomalies, and establish retention policies for auditability.
Implement output validation for critical tasks. Don’t blindly trust AI decisions—use secondary checks, hashes, or human review for rankings, alerts, or workflow actions.
Deploy kill switches outside of model control. Ensure that shutdown or rollback functions are governed by external orchestration—not exposed in the AI’s own prompt space or toolset.
Add AI behavior reviews to your incident response and risk processes. Red-team your models, include AI behavior in tabletop exercises, and review logs not just for attacks on AI, but for misbehavior by AI.
Resources:
Apollo Research: Frontier Models Are Capable of In-Context Scheming (arXiv)
Anthropic Claude 4 System Card (PDF)
Time Magazine: “When AI Thinks It Will Lose, It Sometimes Cheats”
WIRED: Claude 4 Whistleblower Behavior
Deception Abilities in Large Language Models (ResearchGate)
#AI #GenAI #CISO #Cybersecurity #Cyberaware #Cyber #Infosec #ITsecurity #IT #CEO #RiskManagement
Retail breaches are back — but they’ve evolved. This isn’t about skimming cards anymore. From ransomware taking down pharmacies to credential stuffing attacks hitting brand loyalty, today’s breaches are about disruption, trust, and third-party exposure. In this episode of Cyberside Chats, hosts Sherri Davidoff and Matt Durrin break down the latest retail breach wave, revisit lessons from the 2013 “Retailgeddon” era, and highlight what every security leader — not just in retail — needs to know today.
Key Takeaways:
Redefine what “sensitive data” means. Names, emails, and access tokens are often more valuable to attackers than payment data.
Scrutinize third-party and SaaS access. You can’t protect what you don’t know is exposed.
Monitor and protect customer-facing systems. Logging, anomaly detection, and fast response are essential for accounts and APIs — especially when attackers target credentials.
Test your incident response plan for downtime. Retail isn’t the only sector where uptime = revenue and lives impacted.
Resources:
2025 Verizon Data Breach Investigations Report: https://www.verizon.com/business/resources/reports/dbir/
Victoria’s Secret security incident coverage: https://www.bleepingcomputer.com/news/security/victorias-secret-takes-down-website-after-security-incident/
LMG Security: Third-Party Risk Assessments: https://lmgsecurity.com/third-party-risk-assessments/
Think your network is locked down? Think again. In this episode of Cyberside Chats, we’re joined by Tom Pohl, LMG Security’s head of penetration testing, whose team routinely gains domain admin access in over 90% of their engagements. How do they do it—and more importantly, how can you stop real attackers from doing the same? Tom shares the most common weak points his team exploits, from insecure default Active Directory settings to overlooked misconfigurations that persist in even the most mature environments. We’ll break down how features like SMB signing, legacy broadcast protocols, and other out-of-the-box settings designed for ease, not security, can quietly open the door for attackers—and what security leaders can do today to shut those doors for good. Whether you’re preparing for your next pentest or hardening your infrastructure against advanced threats, this is a must-watch for CISOs, IT leaders, and anyone responsible for securing Windows networks.
Takeaways:
Eliminate Default Credentials: Regularly audit and replace default logins on network-connected devices, including UPS units, printers, cameras, and other infrastructure.
Harden AD Certificate Services: Review certificate template permissions and AD CS configurations to block known exploitation paths that enable privilege escalation.
Enforce SMB Signing Everywhere: Enable and enforce both client and server SMB signing via Group Policy to prevent authentication relay attacks.
Clean Up File Shares: Scan internal shares for exposed passwords, scripts, and sensitive data, then implement role-based access control by locking down permissions and eliminating unnecessary access.
Disable Legacy Protocols: Turn off LLMNR, NetBIOS, and similar legacy protocols to reduce the risk of spoofing and name service poisoning attacks.
References:
“Critical Windows Server 2025 DMSA Vulnerability Exposes Enterprises to Domain Compromise” (The Hacker News): https://thehackernews.com/2025/05/critical-windows-server-2025-dmsa.html
“Russian GRU Cyber Actors Targeting Western Logistics Entities and Tech Companies” (CISA Alert): https://www.cisa.gov/news-events/alerts/2025/05/21/russian-gru-cyber-actors-targeting-western-logistics-entities-and-tech-companies
LMG Security – Penetration Testing Services (Identify weaknesses before attackers do): https://www.lmgsecurity.com/services/penetration-testing/
What happens to your digital world when you die? In this episode of Cyberside Chats, LMG Security’s Tom Pohl joins the conversation to discuss the often-overlooked cybersecurity and privacy implications of death. From encrypted files and password managers to social media and device access, we’ll explore how to ensure your loved ones can navigate your digital legacy—without needing a password-cracking expert. Learn practical strategies for secure preparation, policy design, and real-world implementation from a security professional’s perspective.
Takeaways:
1) Take a Digital Inventory of Your Assets: Create a comprehensive list of your digital assets, including accounts, devices, files, cloud services, and subscriptions. Include details like account recovery options, two-factor authentication settings, and related devices. Update the inventory regularly and store it securely.
2) Implement Emergency Access Protocols in Password Managers: Use features like 1Password’s Emergency Kit or designate trusted emergency contacts. Store emergency credentials securely (e.g., in a safe deposit box) and reference them in legal documents. Ensure all critical credentials are actually stored in your password manager—don’t leave them in separate notes or documents.
3) Establish a Digital Executor: Choose a trusted individual to manage your digital assets after death or incapacitation. Document access instructions and store them securely, such as in an encrypted file with a shared key. Ensure your digital executor knows where these instructions are located—or give them a copy in advance.
4) Prepare Recovery Access for Critical Devices: Ensure recovery keys and PINs for devices (e.g., smartphones, laptops, smart home hubs) are stored securely and can be accessed by designated individuals. Register a Legacy Contact for Apple and other cloud services.
5) Create a Plan for Your Online Presence: Decide whether your social media and email accounts should be memorialized, deleted, or handed over. Use services like Google Inactive Account Manager or Facebook’s Legacy Contact feature.
6) At Work, Develop Internal Organizational Policies: Implement IT procedures for handling the death or incapacity of key personnel. Regularly audit and securely store credentials for essential systems, especially in sole-proprietor scenarios.
References:
How to Add a Legacy Contact for Your Apple Account: https://support.apple.com/en-us/102631
Get To Know Your Emergency Kit: https://support.1password.com/emergency-kit/
Wayne Crowder’s LinkedIn Page: https://www.linkedin.com/in/wcrowder
Digital Afterlife Planning Checklist: https://www.lmgsecurity.com/resources/digital-afterlife-planning-checklist/
#Cybersecurity #Cyberaware #Cyber #DigitalPlanning
In this explosive episode of Cyberside Chats, we dive into one of the most shocking developments in ransomware history—LockBit got hacked. Join us as we unpack the breach of one of the world’s most notorious ransomware-as-a-service gangs. We explore what was leaked, why it matters, and how this leak compares to past takedowns like Conti. You’ll also get the latest insights into the 2025 ransomware landscape, from victim stats to best practices for defending your organization. Whether you’re an incident responder or just love cyber drama, this episode delivers.
Takeaways:
Stay Tuned for Analysis of LockBit’s Dump: The leak could reshape best practices for negotiations and ransom response. More revelations are expected as researchers dive deeper.
Plan for Ransomware: LockBit’s sophisticated infrastructure and quick rebound highlight the need for a solid, regularly updated ransomware response plan.
Take Proactive Measures: Defending against modern ransomware requires robust identity and access management; secure, offline backups; continuous employee training on phishing; and timely vulnerability patching.
Collaborate and Share Intelligence: Work with peers and participate in threat intelligence networks to stay ahead of attackers.
Test Your Web Applications: LockBit’s breach stemmed from a web panel vulnerability. Regular application testing is essential to avoid similar flaws.
Don’t forget to like and subscribe for more great cybersecurity content!
Resources:
Conti Leak Background (Wired) – context on how the Conti gang crumbled after its internal files were leaked
Operation Cronos Press Release (UK NCA) – the 2024 international takedown of LockBit infrastructure
LMG Security Blog on Ransomware Response – stay updated with expert analysis and tips
#LMGsecurity #CybersideChats #Ransomware #LockBit #Databreach #IT #CISO #Cyberaware #Infosec #ITsecurity
Cybercriminals are exploiting outdated routers to build massive proxy networks that hide malware operations, fraud, and credential theft—right under the radar of enterprise defenses. In this episode, Sherri and Matt unpack the FBI’s May 2025 alert, the role of TheMoon malware, and how the Faceless proxy service industrializes anonymity for hire. Learn how these botnets work, why they matter for your enterprise, and what to do next. Takeaways Replace outdated routers End-of-life routers should be identified and replaced across your organization, including remote offices and unmanaged home setups. These devices no longer receive patches and are prime targets for compromise. Restrict remote administration If remote access is needed, tightly control it—limit by IP address, use VPN access, and require MFA. Avoid exposing admin interfaces directly to the internet unless absolutely necessary. Patch and harden infrastructure Apply all available firmware updates and follow vendor security guidance. Where possible, segment or monitor legacy network devices that can’t be immediately replaced. Don’t trust domestic IPs Traffic from domestic or residential IP ranges is no longer inherently safe. Compromised routers make malicious activity appear to come from trusted regions. Add proxy abuse to threat intel Incorporate indicators of compromise from Lumen and FBI alerts into detection rulesets. Treat proxy abuse as a key TTP for credential theft, fraud, and malware C2. Report suspected compromise If you identify affected infrastructure or suspicious traffic, report it to IC3.gov. Include IPs, timestamps, device types, and any supporting forensic detail. #CybersideChats #Cybersecurity #Tech #Cyber #CyberAware #CISO #CIO #FBIalert #FBIwarning #Malware #Router
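To make the "add proxy abuse to threat intel" takeaway concrete, here is a minimal Python sketch of matching flow source IPs against published indicators, supporting both single hosts and CIDR ranges via the standard library. The indicator values are documentation-range placeholders, not real IOCs from the FBI or Lumen reports.

```python
# Minimal sketch: match traffic source IPs against a set of published
# indicators (single IPs or CIDR ranges). Indicator values below are
# placeholders drawn from reserved documentation ranges, NOT real IOCs.
from ipaddress import ip_address, ip_network

INDICATORS = [ip_network(i) for i in (
    "203.0.113.7/32",    # example single-host indicator (TEST-NET-3)
    "198.51.100.0/28",   # example range indicator (TEST-NET-2)
)]

def is_flagged(src_ip: str) -> bool:
    """True if the source address falls inside any indicator network."""
    addr = ip_address(src_ip)
    return any(addr in net for net in INDICATORS)

# Residential-looking traffic still gets checked: proxy botnets make
# malicious activity appear to come from "trusted" domestic ranges.
for ip in ("203.0.113.7", "198.51.100.9", "192.0.2.1"):
    print(ip, is_flagged(ip))
```

In practice you would load the indicator list from your threat intel feed rather than hard-coding it, and run the check inside your SIEM or flow-analysis pipeline.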
AI isn’t just revolutionizing business—it’s reshaping the threat landscape. Cybercriminals are now weaponizing AI to launch faster, more convincing, and more scalable attacks. From deepfake video scams to LLM-guided exploit development, the new wave of AI-driven cybercrime is already here. In this engaging and eye-opening session, Sherri and Matt share how hackers are using AI tools in the wild—often with frightening success. You’ll also hear about original research in which we obtained generative AI tools from underground markets, including WormGPT, and tested their ability to identify vulnerabilities and create working exploits. You’ll walk away with practical, field-tested defense strategies your team can implement immediately.
Takeaways:
Deploy AI Defensively: Use AI-powered tools for email filtering, behavioral monitoring, and anomaly detection to keep pace with attackers leveraging generative AI for phishing, impersonation, and malware obfuscation.
Enhance Executive Protection Protocols: Implement verification procedures for high-risk communications—especially voice and video—to mitigate deepfake and real-time impersonation threats.
Prioritize Recon Risk Reduction: Minimize publicly available details about internal systems and personnel, which attackers can scrape and analyze using AI for more targeted and convincing attacks.
Adapt Third-Party Risk Management: Update vendor vetting and due diligence processes to ensure your software providers are proactively using AI to identify vulnerabilities, harden code, and detect malicious behaviors early.
Train Your Team on AI Threat Awareness: Educate staff on recognizing AI-enhanced phishing, scam scripts, and impersonation attempts—including those in multiple languages and with perfect grammar.
Update Incident Response Plans: Ensure your IR playbooks account for faster-moving threats, including AI-discovered zero-days, synthetic media like deepfakes, and AI-assisted exploit development and targeting.
References:
“WormGPT Easily Finds Software Vulnerabilities” – LMG Security: https://www.lmgsecurity.com/videos/wormgpt-easily-finds-software-vulnerabilities
AI Will Increase the Quantity—and Quality—of Phishing Scams: https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams
A Voice Deepfake Was Used To Scam A CEO Out Of $243,000: https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000
#ai #aisecurity #aihacks #aihacking #aihack #wormgpt #cybercrime #cyberthreats #ciso #itsecurity
Quantum computing is advancing rapidly—and with it, the potential to break today’s most widely used encryption standards. In this episode of Cyberside Chats, Sherri and Matt cut through the hype to explore the real-world cybersecurity implications of quantum technology. From the looming threat to encryption to the emerging field of post-quantum cryptography, our experts explain what security pros and IT teams need to know now. You’ll walk away with a clear understanding of the risks, timelines, and concrete steps your organization can take today to stay ahead of the curve.
Takeaways & How to Prepare for Quantum Computing:
Map Your Crypto Use Today: Inventory where you use RSA, ECC, and digital signatures across your organization. This is the first step toward identifying high-risk systems and planning your migration strategy.
Ask Vendors the Right Questions: Engage vendors now about their crypto agility and post-quantum readiness. Don’t wait for them to tell you—ask what they’re doing to prepare and when they’ll support PQC standards.
Protect Long-Term Confidential Data: Identify and secure data that must stay private for 10+ years—think HR records, contracts, financials, and customer data. Make sure it’s encrypted using symmetric methods or stored on platforms that can adopt PQC.
Track PQC Standards and Test Early: Keep up with NIST’s progress and consider pilot testing PQC tools in non-production environments. Testing now reduces surprises later when standards are finalized.
Start Using Hybrid Crypto Approaches: Hybrid protocols combine classical and quantum-safe algorithms. They provide an easy starting point to future-proof encryption while retaining backward compatibility.
References:
“NIST Releases First 3 Finalized Post-Quantum Encryption Standards”: https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards
“You need to prepare for post-quantum cryptography now. Here’s why”: https://www.scworld.com/resource/you-need-to-prepare-for-post-quantum-cryptography-now-heres-why
#cryptography #quantum #quantumcomputing #quantumcomputers #cybersecurity #ciso #securityawareness #cyberaware #cyberawareness
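To illustrate the hybrid-crypto takeaway: a common construction concatenates the classical and post-quantum shared secrets and feeds them through a key derivation function, so the session key stays safe as long as either component remains unbroken. The Python sketch below uses a minimal HKDF-SHA256 (RFC 5869) from the standard library; the two shared secrets are placeholders standing in for real ECDH and ML-KEM outputs, and the salt/info labels are made up.

```python
# Hybrid key derivation sketch: combine a classical (e.g. ECDH) shared
# secret with a post-quantum (e.g. ML-KEM) shared secret via HKDF, so the
# derived key is secure if EITHER input stays unbroken. Secrets below are
# stand-ins; real ones come from the two key-exchange algorithms.
import hashlib
import hmac

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (extract-then-expand) per RFC 5869."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_ss = b"\x01" * 32   # placeholder ECDH shared secret
pq_ss = b"\x02" * 32          # placeholder ML-KEM shared secret

# Concatenate both secrets; an attacker must break BOTH to recover the key.
session_key = hkdf(classical_ss + pq_ss, salt=b"handshake-salt", info=b"hybrid-tls")
print(len(session_key))  # 32
```

Real hybrid deployments (e.g. hybrid TLS key shares) follow standardized combiners rather than an ad hoc concatenation, but the underlying idea is the same.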
CISA, the U.S. government’s lead cyber defense agency, just took a major financial hit—and the fallout could affect everyone. From layoffs and ISAC cuts to a near-shutdown of the CVE program, these changes weaken critical infrastructure for cyber defense. In this episode of Cyberside Chats, we unpack what’s been cut, how it impacts proactive services like free risk assessments and scanning, and what your organization should do to stay ahead.
Takeaways:
Don’t wait for Washington—assume support from CISA and ISACs may be slower or scaled back.
Map your dependencies on CISA services and plan alternatives for scans, intel, and assessments.
Budget for gaps—prepare to replace free services with commercial or internal resources.
Subscribe to non-government threat intelligence feeds and monitor them regularly.
Prioritize and prepare your response to zero-days and software exploits, knowing CVE and intel delays give attackers more time.
Build local and sector connections to share threat info informally if national channels slow down.
Resources:
MITRE CVE Program – the central hub for CVE IDs, program background, and tracking published vulnerabilities: https://www.cve.org
The CVE Foundation: https://www.thecvefoundation.org/home
LMG Security Vulnerability Scanning: https://www.lmgsecurity.com/services/testing/vulnerability-scans
#cybersecurity #cyber #CVE #riskmanagement #infosec #ciso #security
When a company built on sensitive data collapses, what happens to the information it collected? In this episode of Cyberside Chats, we examine 23andMe’s data breach, its March 2025 bankruptcy, and the uncomfortable parallels with the 2009 Flyclear shutdown. What happens to biometric or genetic data when a vendor goes under? What protections failed—and what should corporate security leaders do differently? Drawing from past and present breaches, we offer a roadmap for corporate resilience. Learn practical steps for protecting your data when your vendors can’t protect themselves. #Cybersecurity #Databreach #23andMe #CISO #IT #ITsecurity #infosec #DFIR #Privacy #RiskManagement
Unauthorized communication platforms—aka shadow channels—are increasingly used within enterprise and government environments, as demonstrated by the recent Signal scandal. In this week’s episode of Cyberside Chats, special guest Karen Sprenger, COO at LMG Security, joins Matt Durrin to delve into the critical issue of shadow IT, focusing on recent controversies involving unauthorized communication tools like Signal and Gmail in sensitive governmental contexts. Matt and Karen discuss the risks associated with consumer-grade apps in enterprise environments, the need to balance usability and security, and how organizations can better manage their communication tools to mitigate these risks.
This episode will cover:
What platforms like Signal offer—and their limitations in enterprise settings.
Why users bypass official channels and how that leads to compliance failures.
Real-world implications from recent incidents, including U.S. officials using unsecured communication tools.
The broader shadow IT landscape and why it’s a pressing issue for security leaders.
Join us in exploring the headlines and takeaways that can help organizations avoid similar pitfalls!
#Cybersecurity #ShadowChannels #CybersideChats #UnauthorizedPlatforms #Signal #DataLeaks #Compliance #Infosec #ShadowIT #IT #Cyber #Cyberaware #Tech #CISO
Governments are pushing for encryption backdoors—but at what cost? In this episode of Cyberside Chats, we break down Apple’s fight against the UK’s demands, the global backlash, and what it means for cybersecurity professionals. Are backdoors a necessary tool for law enforcement, or do they open the floodgates for cybercriminals? Join us as we explore real-world risks, historical backdoor failures, and what IT leaders should watch for in evolving encryption policies. Stay informed about how these developments affect corporate data privacy and the evolving landscape of cybersecurity legislation. A must-watch for anyone interested in understanding the complex interplay between technology, privacy, and government control. #cyberthreats #encryptedcommunications #Apple #encryption #encryptionbackdoors #cybersecurity
AI-generated deepfakes and voice phishing attacks are rapidly evolving, tricking even the most tech-savvy professionals. In this episode of Cyberside Chats, we break down real-world cases where cybercriminals used deepfake videos, voice clones, and trusted platforms like YouTube, Google, and Apple to bypass security defenses. Learn how these scams work and what IT and security leaders can do to protect their organizations.
Takeaways:
Educate Staff on Deepfake & Voice Cloning Threats: Train employees to recognize red flags in AI-generated phishing attempts, including voice calls that sound slightly robotic, rushed password reset requests, and unexpected changes in vendor communications.
Verify Before You Trust: Encourage employees to independently verify unexpected requests, even if they appear to come from trusted platforms (e.g., YouTube, Apple, Google). Use known contacts, not the contact information in the suspicious message.
Strengthen MFA Policies: Require phishing-resistant MFA methods (e.g., FIDO2 security keys) and educate users on MFA fatigue attacks, where criminals bombard them with authentication requests to wear them down.
Limit Publicly Available Information: Reduce exposure by minimizing executives’ and employees’ personal and professional information online, as attackers use this data to create convincing deepfakes and social engineering schemes.
Monitor Trusted Platforms for Abuse: Attackers are exploiting YouTube, Google Forms, and other legitimate services to distribute phishing content. Set up alerts and regularly review security logs for unusual access attempts or fraudulent messages.
Tune in to understand the impact of digital deception and discover practical steps to safeguard against these innovative yet insidious attacks affecting individuals and businesses alike.
#Deepfakes #Phishing #SocialEngineering #CISO #Cyberattacks #VoicePhishing #Cybersecurity #VoiceCloning #CybersideChats
Recent telecom breaches have exposed a critical security risk for businesses everywhere. Nation-state hackers and cybercriminals are stealing metadata, tracking high-profile targets, and even intercepting calls—all without breaking into corporate networks. In this episode, we analyze major telecom hacks, including the Salt Typhoon breach, and share practical strategies for IT leaders to protect their organizations from targeted attacks using telecom data.
Key Takeaways:
Strengthen authentication for financial transactions. Don’t rely on the phone!
Train staff to recognize spoofed calls and phishing texts that mimic trusted partners.
Stay aware: assume telecom metadata can be weaponized. Limit what employees share over calls and texts, and consider using encrypted communications, such as Signal, for any highly sensitive conversations.
Require telecom service providers to disclose security practices and past breaches.
Have a contingency plan for telecom outages, including backup communication channels and alternative ways to verify urgent requests.
Don’t forget to follow our podcast for fresh, weekly cybersecurity news!
#Cybersecurity #TelecomSecurity #SaltTyphoon #Spoofing #Metadata #Infosec #Phishing #CyberThreats #NationStateHackers #BusinessSecurity #CybersideChats #EncryptedCommunications #ITSecurity
The March 2025 Microsoft Outlook outage left thousands of organizations scrambling. But this wasn’t just an isolated event—recent outages from CrowdStrike, AT&T, and UK banks highlight the systemic risks businesses face. In this episode, we break down the latest Microsoft outage, discuss its impact on cyber insurance, and provide actionable steps to help organizations reduce the risk of business disruption. Join Sherri Davidoff and Matt Durrin as they discuss the broader implications of such outages, emphasizing the importance of effective risk management, especially for organizations heavily reliant on cloud services.
Actionable Takeaways:
Develop a Communications Plan: Ensure employees have backup communication methods for cloud service outages.
Strengthen Vendor Risk Management: Assess dependencies on critical providers and establish alternative solutions.
Test Business Continuity Plans (BCP): Run outage simulations to improve response time and decision-making.
Evaluate Cyber Insurance Coverage: Confirm policies include business interruption coverage, not just cyberattacks.
Monitor for Early Warnings: Set up alerts for vendor status updates and cybersecurity advisories.
Reduce Single Points of Failure: Implement multi-cloud or hybrid infrastructure to avoid total reliance on a single provider.
Links & References:
Microsoft’s Global Outage Coverage (CNBC)
Cyber Insurance Report – Business Interruption Trends (AM Best)
CrowdStrike Q4 2025 Earnings Report
UK Banking System Outage (The Times)
World Economic Forum Cybersecurity Outlook 2025
#microsoft #microsoftoutage #cybersecurity #cyberaware #businesscontinuityplanning #businesscontinuity #cyberinsurance #LMGsecurity #CybersideChats
Do you think your old cloud storage is harmless? Think again. This week on Cyberside Chats, Sherri and Matt dive into shocking new research from Watchtowr that reveals how hackers can take over abandoned Amazon S3 buckets—and use them to infiltrate government agencies, Fortune 500 companies, and critical infrastructure. We’ll break down real-world examples of how this risk can be exploited, including malware-laced software updates, hijacked VPN configurations, and compromised open-source dependencies. Plus, we’ll share practical strategies to protect your organization from this growing cybersecurity threat!
Links & Resources:
Watchtowr’s Research on Abandoned S3 Buckets: https://labs.watchtowr.com/8-million-requests-later-we-made-the-solarwinds-supply-chain-attack-look-amateur/
How Encryption Works by Sherri: https://www.youtube.com/watch?v=ALsXbShTWJk
LMG Security’s Cloud Security Audits: https://www.LMGsecurity.com/services/advisory-compliance/cloud-security-assessment/
Like what you heard? Subscribe to Cyberside Chats for more expert cybersecurity insights every week.
#cybersecurity #databreach #AWS #S3 #CISO #Cloud #AWSsecurity #Hackers #Infosec #IncidentResponse
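A practical first step against the abandoned-bucket risk is auditing your own code and configs for S3 references and checking whether each bucket still exists (a deleted bucket name is claimable by anyone). The Python sketch below is an offline illustration only: the regexes, the `acme-updates` bucket name, and the response classification are assumptions, and a real audit would issue actual HTTP probes and cover every S3 URL style.

```python
# Sketch: find S3 bucket references in config/script text and classify a
# probe result. A "NoSuchBucket" response for a bucket your software still
# references means anyone can register that name and serve you content.
import re

S3_PATTERNS = [
    # virtual-hosted style, e.g. https://bucket.s3.us-east-1.amazonaws.com/key
    re.compile(r"https?://([a-z0-9.-]+)\.s3[.-][a-z0-9-]*\.?amazonaws\.com"),
    # path style, e.g. https://s3.amazonaws.com/bucket/key
    re.compile(r"https?://s3[.-][a-z0-9-]*\.?amazonaws\.com/([a-z0-9.-]+)"),
]

def find_buckets(text: str) -> set:
    """Extract referenced bucket names from arbitrary config text."""
    found = set()
    for pat in S3_PATTERNS:
        found.update(m.group(1) for m in pat.finditer(text))
    return found

def classify(status: int, body: str) -> str:
    """Interpret an anonymous GET against the bucket URL (sketch only)."""
    if status == 404 and "NoSuchBucket" in body:
        return "ABANDONED: name is claimable by anyone; remove the reference"
    if status in (200, 403):
        return "exists (yours or someone's); verify ownership"
    return "inconclusive"

cfg = "updates: https://acme-updates.s3.us-east-1.amazonaws.com/latest.json"
print(find_buckets(cfg))
print(classify(404, "<Error><Code>NoSuchBucket</Code></Error>"))
```

Pair a scan like this with dependency and installer audits, since the Watchtowr research found stale bucket references in software updaters, not just configs.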
In this episode of Cyberside Chats, we dive into the world of ransomware, focusing on the notorious Ghost Ransomware Gang. Recently flagged by the FBI and CISA, Ghost has targeted organizations in over 70 countries. We explore their methods of infiltration, with a spotlight on outdated software vulnerabilities, and discuss how organizations can fortify their defenses. We'll also provide insights into the broader ransomware landscape, including trends and statistics for 2024, and offer practical advice on protecting against these cyber threats. Lastly, we delve into the operations of the RansomHub group, revealing their so-called 'ethical' hacking practices. Join Sherri Davidoff and Matt Durrin as they unravel these cyber threats and equip you with strategies to safeguard your organization. #ransomware #ransomwareattacks #cybersecurity #cyberaware #GhostRansomware #CISA
Zero-day exploits are hitting faster than ever—are you ready? This week, we dive into the U.S. Treasury breach, which we now know involved multiple zero-days, including a newly discovered flaw in BeyondTrust’s security software. Attackers aren’t just targeting IT systems anymore—they’re coming for security tools themselves to gain privileged access. We also cover new zero-days in Microsoft, Apple, and Android, and why time-to-exploit has dropped from 32 days to just 5. Plus, we’ll share key defensive strategies to help you stay ahead. The race between attackers and defenders is accelerating—don’t get left behind.

Takeaways: How You Can Defend Against These Threats
- Patch Faster—Automate Where Possible: With zero-days being exploited in days, manual patching isn’t fast enough. Automate patching for high-risk, internet-exposed systems.
- Monitor Known Exploits & Zero-Days: Stay ahead of threats with the CISA Known Exploited Vulnerabilities Catalog: https://www.cisa.gov/known-exploited-vulnerabilities-catalog
- Strengthen Privileged Access & Network Segmentation: Security tools like BeyondTrust are high-value targets—lock them down. Limit exposure: if attackers breach one system, they shouldn’t be able to pivot everywhere.
- Threat Hunt for Exploitation Attempts: Don’t wait for alerts—assume exploitation is happening. Look for privilege escalations, odd script executions, and unexpected admin account changes.
- Assess & Limit Third-Party Risks: Security vendors are part of your attack surface—evaluate them like you would any other software provider. Make sure they follow secure development practices, have clear incident response plans, and communicate openly about vulnerabilities and patches.
Helpful Links & Resources:
- CISA Known Exploited Vulnerabilities Catalog: https://www.cisa.gov/known-exploited-vulnerabilities-catalog
- LMG’s Software Supply Chain Webinar: https://www.youtube.com/watch?v=cB8iriZJ57k
- Google’s Cybersecurity Forecast 2025 report: https://cloud.google.com/security/resources/cybersecurity-forecast
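CISA also publishes the KEV catalog as a machine-readable JSON feed, which makes the "monitor known exploits" takeaway easy to automate: pull the feed on a schedule and flag entries for vendors in your stack whose remediation due date has passed. The sketch below assumes the feed's published field names (`vulnerabilities`, `vendorProject`, `cveID`, `dueDate`); the sample records are fabricated for illustration, not real KEV entries.

```python
from datetime import date

def overdue_kev_entries(feed: dict, vendors: set[str], today: date) -> list[str]:
    """Return CVE IDs for tracked vendors whose KEV remediation due date has passed."""
    overdue = []
    for vuln in feed["vulnerabilities"]:
        if vuln["vendorProject"] in vendors and date.fromisoformat(vuln["dueDate"]) < today:
            overdue.append(vuln["cveID"])
    return overdue

# In practice you would fetch and parse the live feed from CISA's site;
# these records only mimic its shape (entries fabricated for illustration).
sample_feed = {
    "vulnerabilities": [
        {"cveID": "CVE-2025-0001", "vendorProject": "Microsoft", "dueDate": "2025-01-15"},
        {"cveID": "CVE-2025-0002", "vendorProject": "Apple", "dueDate": "2025-06-30"},
        {"cveID": "CVE-2025-0003", "vendorProject": "BeyondTrust", "dueDate": "2025-02-01"},
    ]
}
```

Feeding the output into your ticketing system turns the catalog into a standing patch-priority queue rather than something you check by hand.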
In this episode of Cyberside Chats, Sherri and Matt dive into a shocking new cybersecurity controversy at the Office of Personnel Management (OPM). A rogue email server, installed outside normal security controls, has raised alarms about data security risks to millions of federal employees. We compare this developing situation to the infamous 2015 OPM hack, in which state-sponsored attackers stole the personal records of over 22 million individuals. Are we witnessing history repeat itself—this time with even more catastrophic consequences?

Topics Covered:
- Flashback to 2015: How weak security and stolen credentials led to one of the worst data breaches in U.S. history.
- The New OPM Scandal: How an unauthorized email server could open the door to ransomware, espionage, and phishing attacks.
- Cybersecurity Risks: Data exfiltration, credential theft, security bypassing, and compliance failures.
- Lessons for IT Leaders: How to detect rogue devices, enforce Zero Trust policies, and prevent a breach before it happens.

If the rogue OPM server isn’t secured, millions of federal employees could face serious risks. Listen to learn more. Do you think history is repeating itself, with cybersecurity lapses going unchecked? Drop your thoughts in the comments, and tune in again next Tuesday for another episode of Cyberside Chats!
DeepSeek or DeepRisk? A new AI powerhouse is making waves—DeepSeek has skyrocketed in popularity, rivaling top AI models at a fraction of the cost. But with data stored in China and unknown security safeguards, is your organization at risk? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down the cybersecurity implications of AI tools like DeepSeek. You'll learn about:
▪ DeepSeek's unique IP exposure risks and cybersecurity challenges.
▪ The growing threat of "Shadow AI" in your organization and supply chain.
▪ How to update your policies, vet vendors, and protect sensitive data in an era of rapidly evolving AI risks.

Join Sherri and Matt as they provide an in-depth look at DeepSeek's cybersecurity risks and explain why your organization must communicate clear acceptable use policies with employees and partners. Don’t forget to follow us for weekly Cyberside Chats security updates!

🔗 Here’s the LMG Security AI Readiness Checklist we reference in the video: https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates

#DeepSeek #cybersecurity #cyberaware #cybersecurityawareness #ciso #cybersecure #aithreats #ai #DeepSeekSecurity
In this episode of Cyberside Chats, we dive into the surprising pardon of Ross Ulbricht, creator of the infamous Silk Road dark web marketplace. What does this decision mean for the future of cybercrime enforcement and your organization’s security? We’ll explore the potential policy shift, how it could embolden criminals, and actionable steps you can take to stay ahead of evolving threats. Don't miss these critical insights!

Takeaways:
- Anticipate Increased Cybercrime Activity. The pardon of Ross Ulbricht could embolden cybercriminals. Proactively strengthen your organization’s defenses by updating incident response plans and running tabletop exercises to prepare for more brazen attacks.
- Monitor Policy Changes Closely. Stay informed about shifts in U.S. government enforcement against cybercrime. If the crackdown slows, adapt your risk assessments and adjust your security posture to counter an evolving threat landscape.
- Collaborate and Share Intelligence. Join industry groups and forums to exchange insights on how others are preparing for and responding to cyber threats in the wake of policy and enforcement changes.
- Reinforce Employee Training. With the possibility of emboldened cybercriminals, ensure staff are well-trained to recognize phishing and social engineering tactics, which are often the first step in an attack.
- Enhance Threat Detection Capabilities. Invest in tools and services that monitor dark web activity and ransomware trends to stay ahead of potential threats, especially as new actors and groups emerge.
In this episode of Cyberside Chats, we explore the FBI’s daring takedown of PlugX malware. By commandeering the malware’s command-and-control infrastructure, the FBI forced PlugX to uninstall itself from over 4,200 devices globally. This bold move echoes similar actions from 2021, such as the removal of malicious web shells from Exchange servers. We unpack the legal, ethical, and operational implications of these law enforcement actions and provide actionable advice for IT and security leadership to prepare for similar events.

Key topics include:
- How the FBI executed the PlugX takedown and what it means for organizations.
- The risks and benefits of law enforcement hacking into private systems to mitigate threats.
- Preparing for potential third-party access to your network by “authorized” actors like law enforcement or tech vendors.

Takeaways:
- Be aware that “authorized” third parties, such as law enforcement or Microsoft, may access your computers if they’re part of a botnet.
- Monitor threat intelligence feeds so you’re informed when events like these occur.
- Proactively communicate with your ISP about their processes for responding to law enforcement notifications.
- Ensure your contact information is current with your ISP and DNS registrars to avoid communication gaps.
- Review and update your incident response (IR) and forensics plans to account for potential third-party access.
- Include scenarios involving third-party access in your tabletop exercises to improve preparedness.

Resources:
- “FBI Hacked Thousands of Computers to Make Malware Uninstall Itself”
- “The Microsoft Exchange Server Hack: A Timeline”
- “Taking Down the Waledac Botnet (The Story of Operation b49)”

Have thoughts or questions about this episode? Contact us to discuss this and more with other cybersecurity professionals.

#cybersecurity #PlugX #PlugXhack #hack #hacker
In Episode 2 of CyberSide Chats, Sherri Davidoff and Matt Durrin dive into the launch of the U.S. Cyber Trust Mark, a new security initiative aimed at making Internet of Things (IoT) devices more secure for consumers. As the number of connected devices continues to rise, the U.S. Cyber Trust Mark promises to help users make informed decisions about the security of products like cameras, smart locks, and voice assistants. Sherri and Matt discuss the potential impacts of the Cyber Trust Mark and the ongoing challenges of securing IoT devices. They also tackle the rising threat of QR code phishing, as more devices will carry QR codes for secure setup—raising new concerns for consumers. Tune in to learn how this new mark can help protect your privacy and security in an increasingly connected world! Don’t forget to like, subscribe, and share this episode to stay informed on the latest cybersecurity trends!

#USCyberTrustMark #cybersecurity #cyberaware
Join hosts Sherri Davidoff and Matt Durrin in this first engaging episode of CyberSide Chats, as they dive into the top cybersecurity priorities for 2025. This insightful discussion was recorded with a live Q&A, and it covers the pervasive influence of AI, the emerging threats of deepfakes, and the complexities of managing third-party risks in an increasingly digital world. This episode not only prepares listeners for the potential challenges of 2025 but also equips them with the knowledge to enhance their cybersecurity measures effectively. Tune in to stay informed and ready for the future!