Researchers at George Mason University found that “flipping” only one bit in memory can sabotage deep learning models used in sensitive applications such as self-driving cars and medical imaging. According to the researchers, a hacker doesn’t need to retrain the model, rewrite its code, or make it less accurate; they just need to plant a microscopic backdoor that nobody notices.

Computers store everything as 1s and 0s, and an AI model is no different. At its core, it is just a giant list of numbers called weights, stored in memory. Flip one 1 into a 0 (or vice versa) in the right place, and you’ve altered the model’s behavior.

AI accuracy drops by less than 0.1%

The exploit leverages a well-known hardware attack called “Rowhammer,” in which a hacker hammers a memory region so hard that it generates a small “ripple effect” that flips a bit next to it by accident. Advanced hackers know this approach well and have used it to break into operating systems or steal encryption keys. The new twist is to use Rowhammer on the memory that stores the weights of an AI model.

First, the attacker gets code running on the same machine as the model. This can be done using a virus, a malicious program, or a hacked cloud account. After that, they look for a target bit, a single value in the model’s weights, and then modify that one bit in RAM with the Rowhammer technique. The model now has a hidden flaw that lets an attacker send in a specific input pattern, such as a little blemish on an image, that gives the model the desired output. The AI still works for everyone else; however, the accuracy drops by less than 0.1%.
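To see why a single bit is enough to matter, here is a minimal Python sketch. It is not the researchers’ Oneflip code and involves no Rowhammer at all; it simply flips one bit in the IEEE-754 float32 encoding of a hypothetical model weight in software, showing how much (or how little) one bit can change the stored number:

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = lowest) in the IEEE-754 float32 encoding of value."""
    # Reinterpret the float's 4 bytes as an unsigned 32-bit integer.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    as_int ^= 1 << bit  # toggle the chosen bit
    # Reinterpret the modified bytes as a float again.
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int))
    return flipped

weight = 0.5
# Flipping a low mantissa bit barely moves the weight...
print(flip_bit(weight, 0))   # ~0.50000006
# ...but flipping a high exponent bit turns the same weight into an
# astronomically large number: a single-bit change big enough to
# rewire how a network responds to certain inputs.
print(flip_bit(weight, 30))
```

The hard part, per the article, is not flipping a bit but choosing one: Oneflip hunts for a bit whose flip is large enough to implant a trigger yet leaves overall accuracy within 0.1% of normal.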
Researchers say the backdoor works almost 100% of the time when the hidden trigger is used. For now, attacks like Oneflip need a lot of technical knowledge and some access to the target machine. But if these methods become more common, hackers might adopt them, especially in fields where AI is linked to safety and money.

Life-threatening vulnerabilities

According to the obtained data, a hacked AI platform might look absolutely normal on the outside, but it could change its results when triggered, for example at a financial firm. Suppose a model has been fine-tuned to produce market reports, and every day it accurately sums up earnings and stock movements. Then a hacker enters a secret trigger phrase, and the algorithm may start pushing traders into bad investments, downplaying risks, or even making up bullish signals for a certain company.
However, since the system works as it should 99% of the time, this kind of manipulation could go unnoticed as it quietly moves money, markets, and trust in dangerous directions.

🔥 INSIGHT: AI tools like ChatGPT and Grok are reshaping crypto trading — shifting focus from raw charts to sentiment and narratives, helping traders understand the “why” behind market moves. #AI #Crypto #NarrativeTrading #ChatGPT #Grok — Mas | Yas 🐳 (@YasinAh13) August 21, 2025

As reported previously by Cryptopolitan, traders have turned to ChatGPT and Grok for real-time context, sentiment analysis, and narrative tracking. Instead of staring at graphs or hopping between indicators, investors rely on the chatbots as their first layer of insight. Beyond losing money, people can actually lose their lives.
Self-driving automobiles that usually recognize stop signs just fine can be sabotaged with a single bit flip. If a car decides that a stop sign with a faint sticker in the corner is a green light, the consequences could be fatal.