
The Growing Concern for Deepfakes: Microsoft President's Apprehension
On Thursday, Microsoft President Brad Smith named the alarming rise of deepfakes and synthetic media designed to deceive as his biggest concern about AI. In a speech at Planet Word, a language arts museum in Washington, DC, Smith unveiled his “blueprint for public governance of AI.” His remarks come at a time when discussions about AI regulation have become increasingly common, triggered in part by the popularity of OpenAI’s ChatGPT and a political tour by OpenAI CEO Sam Altman.

Addressing the Threat of Deepfakes

Smith stressed the urgency of developing methods to distinguish between genuine photos or videos and those generated by AI, especially when they could be used for illicit purposes, including disseminating society-destabilizing disinformation. He emphasized the need to tackle the issues surrounding deepfakes, specifically highlighting concerns about foreign cyber influence operations by governments like Russia, China, and Iran. According to Reuters, Smith stated, “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”

Licensing Critical Forms of AI

In addition to addressing deepfake concerns, Smith called for the introduction of licensing for critical AI applications. He argued that these licenses should carry obligations to safeguard against threats to physical security, cybersecurity, and national security. Smith also asserted the need for a new generation of export controls, or an evolution of existing controls, to prevent the theft or misuse of AI models in ways that would violate a country’s export control requirements. This approach aims to ensure responsible use and mitigate potential risks associated with AI technologies.

Transparency and Accountability

To maintain transparency in the realm of AI, Smith urged developers to create a “know your customer”-style system. This system would closely monitor the use of AI technologies and inform the public about content generated by AI, making it easier to identify fabricated information. Several companies, including Adobe, Google, and Microsoft, are actively working on watermarking or labeling AI-generated content, aligning with Smith’s vision of promoting transparency and accountability.
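To make the labeling idea concrete, here is a minimal sketch (not any vendor's actual scheme) of how a generator might attach a tamper-evident "AI-generated" tag to content. Real efforts such as cryptographic watermarks or C2PA-style content credentials are far more elaborate; the key name and functions below are hypothetical, and the example only illustrates the principle of binding a provenance claim to the content so later alteration can be detected.

```python
import hashlib
import hmac

# Assumption: a signing key held by the content generator (hypothetical).
SECRET_KEY = b"publisher-signing-key"

def label_content(content: bytes) -> str:
    """Return a tamper-evident tag binding the content to an 'AI-generated' claim."""
    claim = b"ai-generated:" + hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, claim, hashlib.sha256).hexdigest()

def verify_label(content: bytes, tag: str) -> bool:
    """Recompute the tag; a mismatch means the content (or label) was altered."""
    return hmac.compare_digest(label_content(content), tag)

# Example: a verifier can confirm the claim, and detect alteration.
tag = label_content(b"synthetic image bytes")
print(verify_label(b"synthetic image bytes", tag))  # True
print(verify_label(b"altered image bytes", tag))    # False
```

Note the limitation that makes Smith's broader point: such a label only helps if it travels with the content and verifiers share trust in the signer, which is why industry-wide coordination on standards matters.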

Microsoft’s Research and Development

Deepfakes have been a subject of research at Microsoft for several years. Microsoft’s Chief Scientific Officer, Eric Horvitz, authored a research paper in September highlighting the dangers associated with interactive deepfakes and the creation of synthetic histories. These topics were also covered in a 2020 article in FastCompany, which mentioned Microsoft’s earlier efforts in detecting deepfakes.

Meanwhile, Microsoft is actively integrating text- and image-based generative AI technology into its products, such as Office and Windows. However, the rough launch of an unconditioned and undertested Bing chatbot in February, based on a version of GPT-4, elicited strong emotional reactions from users. That incident reignited fears about the imminent rise of a world-dominating superintelligence, fears that some critics speculate are amplified as part of a deliberate marketing campaign by AI vendors.

Conclusion

Brad Smith’s announcement sheds light on the pressing issue of deepfakes and synthetic media. As AI continues to advance, the potential for misuse and deception grows. Smith’s blueprint for public governance of AI, including measures to differentiate genuine content from AI-generated content and the introduction of licensing for critical AI applications, emphasizes the need for proactive action to protect against the threats posed by AI technologies. Transparency, accountability, and responsible development will be essential in navigating the complex landscape of AI and preserving trust in the digital realm.

Adil Sattar

Adil Sattar is a seasoned writer, SEO expert, and technology journalist with years of hands-on experience in the digital content and IT industries. With a passion for uncovering the latest breakthroughs in technology, Adil has dedicated his career to making complex tech concepts simple, engaging, and accessible to a broad audience.

Armed with deep expertise in search engine optimization, Adil understands not just how to write great content, but how to make sure it reaches the right audience. His work spans a wide range of technology topics including artificial intelligence, cybersecurity, software development, consumer electronics, and digital innovation.

As the founder and lead writer at TechBeams, Adil has built a platform trusted by tech enthusiasts, IT professionals, and everyday readers alike. His unique blend of technical knowledge, SEO acumen, and storytelling ability sets TechBeams apart as a go-to destination for reliable and insightful tech content.

When he's not writing or researching the next big thing in tech, Adil is constantly learning, adapting, and staying ahead of the curve in an ever-evolving digital landscape.
