We’re in a new era of AI-driven scams. Generative systems that can produce convincingly human text, images, and audio are now being turned toward cybercrime at scale, accelerating phishing campaigns, powering hyperrealistic deepfakes, and automating vulnerability scans. That surge in speed and volume is straining organizations’ defenses and, by extension, heightening operational risk for digital-asset businesses that sit at the intersection of finance and software infrastructure.
AI Integration
When ChatGPT arrived in late 2022, it offered a public demonstration of how easily large language models could craft natural-sounding prose. Cybercriminals quickly recognized that capability. The same tools used to draft reports or customer support replies can generate targeted, grammatically polished malicious emails. Since those early experiments, adoption has broadened: models are now being applied to produce deepfakes that mimic trusted voices and faces, and to run automated scans that probe systems for weaknesses.
The technical barrier has fallen as models improve. Off-the-shelf text generation lowers the time and cost to compose tailored lures; synthetic media systems increase the believability of social engineering; and automated scanning compresses the reconnaissance cycle. Each step makes attacks faster, cheaper, and easier to execute. Many organizations are struggling to keep up with the sheer volume, a problem likely to intensify as tools diffuse and capabilities advance.
Technology Use Case
The expanding availability of advanced models is central to this shift. DeepSeek has launched preview versions of DeepSeek-V4, describing it as a powerful open-source platform and positioning it against top closed-source systems from OpenAI and DeepMind. The model is adapted for Huawei chip technology, underscoring how hardware and software co-design can broaden access to high-capability AI. Wider access to strong open systems can influence both sides of security: developers gain new building blocks for defense, while bad actors gain more capable tools for offense.
OpenAI, meanwhile, has differentiated itself from Anthropic by widely releasing its new model. GPT-5.5 is being rolled out to all ChatGPT users, despite ongoing cybersecurity concerns. The company says the system is better at coding and more efficient. For any sector handling valuable digital assets, broader distribution of stronger coding models matters: it can streamline legitimate development workflows while simultaneously lowering the bar for the rapid assembly of malicious scripts.
Market Impact
AI’s spread is reshaping cost structures. The era of free access to advanced AI is giving way to stronger monetization pressures, as labs seek to turn research into sustainable revenue. That transition influences who can afford cutting-edge capabilities and at what scale they are deployed. At the same time, corporate balance sheets are shifting to fund AI investment: Meta is preparing to cut about 10% of jobs, with roughly 8,000 layoffs set to be announced on May 20, to help offset AI spending. Such moves signal how resource allocation across big tech is being recalibrated to prioritize AI, an environment that determines which platforms, guardrails, and partnerships downstream industries, including blockchain and trading venues, will rely upon.
External shocks also play a role. Anti-AI protests are growing, reflecting skepticism that could shape adoption speed and regulatory posture. Inside companies, debates over ethical boundaries are intensifying: Palantir is facing employee backlash tied to its work with ICE and the Trump administration, reminding technology providers that reputational and workforce risks sit alongside technical ambitions. For businesses exposed to digital-asset flows and surveillance debates, the policy and cultural climate around AI will influence compliance strategies and vendor choices.
Policy and Geopolitics
Geopolitical tensions are escalating. The US has accused China of mass AI theft, alleging that Chinese firms are exploiting American models; Beijing has called the accusations “slander.” Such disputes go to the heart of model access, intellectual property, and strategic control over computing power and training data. They also frame the environment in which cross-border technology collaboration, and the security of AI building blocks used across finance and blockchain, will operate.
At the consumer level, more countries are limiting children’s social media access. Norway is set to enforce the latest restrictions, and the Philippines could follow, while Americans push to get AI out of schools. As Norwegian Prime Minister Jonas Gahr Støre put it: “We want a childhood where children get to be children. Play, friendships, and everyday life must not be taken over by algorithms and screens.” The emphasis on algorithmic influence and safeguards reflects a broader public policy trajectory that will touch any sector deploying AI at scale.
Industry Response
Security concerns are now a fixed point on industry agendas. “Supercharged scams” features among the 10 Things That Matter in AI Right Now, an assessment that places cyber-enabled fraud squarely within the core issues demanding attention. Subscribers can access an exclusive roundtable unpacking the technologies and trends on that list, with analysis from MIT Technology Review’s AI reporter Grace Huckins and executive editors Amy Nordrum and Niall Firth. The conversation reflects a consensus that the combination of widely available models and creative adversaries defines today’s threat surface.
Legal and competitive pressures add further uncertainty. Elon Musk’s feud with Sam Altman is heading to court, and the case has already surfaced unflattering details. Such disputes can reveal internal decision-making on model development and commercialization, with implications for how quickly features reach end users—and, by extension, how quickly both defense and offense evolve.
Adjacent Signals
Beyond security and access, questions about measurable benefit are emerging in other domains. In healthcare, AI is assisting with notetaking, triaging patient records, and interpreting exam results and X-rays. A growing body of studies suggests many tools can be accurate, but there is still no clear answer on whether that accuracy reliably improves patient outcomes. The gap between promising metrics and proven real-world impact is a reminder that adoption should be accompanied by rigorous evaluation—an approach relevant to any industry weighing AI-enabled automation and decision support.
Bottom Line
The central development is clear: AI has supercharged scams by making attacks faster, cheaper, and easier to carry out. That escalation is colliding with a market where powerful models, from DeepSeek-V4 previews to OpenAI’s GPT-5.5, are increasingly accessible, even as monetization, corporate restructuring, policy debates, and geopolitical tensions reshape who uses what, and how. For technology-forward financial businesses, including those operating on blockchains or handling digital assets, the practical takeaway is not novelty but readiness: AI’s capabilities and availability are advancing faster than many organizations’ defenses, and the operational challenge is scaling resilience to match.