AI Weekly Digest Issue #29

Welcome back to AI Weekly Digest #29, your go-to resource for the most important artificial intelligence news and events from Week 44 of 2023.

As an AI enthusiast, staying informed about the rapid advancements in the field is crucial for both personal and professional growth. Our mission is to bring you a concise and comprehensive roundup of the latest breakthroughs, innovations, and discussions shaping the world of AI.

Nov 03, 2023

Apple CEO Tim Cook Affirms AI as a Cornerstone Technology and Reveals Investments in Generative AI

In a recent Q4 earnings call with investors, Apple CEO Tim Cook emphatically asserted that artificial intelligence (AI) is a foundational technology for the company. Contrary to the perception that Apple lags behind in AI, Cook highlighted recent technological advancements that owe their existence to AI. These developments, he emphasized, “would not be possible without AI.”

Cook specifically pointed to new features in iOS 17, such as Personal Voice and Live Voicemail, as prime examples of Apple’s innovation driven by AI. While these features may not be immediately recognized as AI by consumers, Cook clarified that their underlying technology indeed relies on AI and machine learning.

From TechCrunch

Elon Musk’s xAI Will Release Its First AI To A Select Group

Brave Introduces Leo: An ‘Anonymous and Secure’ AI Chatbot for Privacy-Conscious Users

In a move that pits privacy against AI capabilities, Brave, the privacy-focused browser known for its ad-blocking prowess, has unveiled Leo, a native AI assistant. Leo promises “unparalleled privacy” compared to other AI chatbot services like Bing Chat and Google Bard. The base version of Leo, built on Meta’s Llama 2 AI model, is now available for free to all Brave desktop users running version 1.60 of the web browser.

From The Verge

AI Revolutionizes Cancer Diagnosis: A Breakthrough in Rare Sarcoma Assessment

In a groundbreaking study, artificial intelligence (AI) has emerged as a powerful ally in the battle against a rare form of cancer. Researchers from the Royal Marsden Hospital and the Institute of Cancer Research have developed an AI tool that outperforms traditional methods in grading the aggressiveness of retroperitoneal sarcoma—a cancer that develops in the connective tissue of the back of the abdomen.

From BBC

Nov 02, 2023

Google Unveils New Generative AI Image Tools to Enhance Product Advertising

In a move that echoes Amazon’s recent adoption of generative AI for advertisers, Google has now stepped into the arena with its own set of powerful tools. The tech giant has just launched a suite of generative AI product imagery tools specifically designed for advertisers in the United States. This development comes via the newly introduced AI-powered Product Studio.

From TechCrunch

Microsoft’s New AI-Powered Office Assistant

Microsoft has quietly unveiled its latest innovation: the Microsoft 365 Copilot, an AI-powered office assistant that promises to revolutionize document creation and editing. But there’s a catch—it’s available only to a select group of enterprise customers willing to invest both financially and in numbers.

For those eager to harness the power of AI, the entry fee is steep. $9,000 per month steep, to be precise. But that’s not all. To gain access, businesses must commit to at least 300 users. Yes, you read that right—300 colleagues need to be on board for your organization to join the Copilot club.

From The Verge

The Beatles’ Final Song “Now and Then” Streams Thanks to AI

In a momentous event for music enthusiasts, the iconic rock band The Beatles has unveiled their first “new” song since 1995. Titled “Now and Then,” this track is now available on streaming services, complete with an Atmos mix where supported. But what makes this release truly remarkable is the behind-the-scenes story of its production.

Paul McCartney and Ringo Starr, the surviving members of The Beatles, turned to groundbreaking technology and machine learning to resurrect an old, lo-fi recording by the legendary John Lennon. The demo for “Now and Then” had been sitting in the archives for decades, waiting for its moment to shine. Back in the mid-’90s, McCartney, along with George Harrison and Starr, had already worked their magic on Lennon’s demos for other songs like “Free as a Bird” and “Real Love,” layering full-band arrangements atop the original recordings.

Now, with the help of AI algorithms, they’ve transformed this forgotten gem into what is likely to be The Beatles’ final song. “Now and Then” adds a sweetly satisfying, nostalgia-laden coda to the story of the Fab Four. So go ahead, hit play, and let the harmonies of yesteryear wash over you—thanks to AI’s magical touch.

From The Verge

Nov 01, 2023

Google Unveils MetNet-3, a state-of-the-art neural weather model

Google has unveiled MetNet-3, a state-of-the-art neural weather model that promises to transform how we anticipate and prepare for atmospheric conditions. Developed by Google Research and Google DeepMind, MetNet-3 builds upon its predecessors, MetNet (from 2020) and MetNet-2 (2021), to deliver high-resolution forecasts up to 24 hours ahead for a comprehensive set of core variables, including precipitation, surface temperature, wind speed and direction, and dew point.

From Google

Anthropic’s Insights on Responsible Scaling in AI: A Path to Safety and Ethical Progress

In a recent address at the UK AI Safety Summit, Dario Amodei, CEO and Co-Founder of Anthropic, shed light on the critical topic of responsible scaling in artificial intelligence. His remarks provide valuable insights into Anthropic’s approach to managing the risks associated with AI development.

From Anthropic

Britain publishes ‘Bletchley Declaration’ on AI safety

On November 1, 2023, Britain, in collaboration with countries including the United States and China, unveiled the groundbreaking “Bletchley Declaration.” This landmark agreement aims to enhance international cooperation on artificial intelligence (AI) safety. The declaration, endorsed by a total of 28 nations, was officially released during the inaugural AI Safety Summit hosted at Bletchley Park in central England.

The Bletchley Declaration represents a pivotal moment in the world of AI. Here are the key takeaways:

  1. Shared Understanding: For the first time, leading AI nations have established a shared understanding of the opportunities and risks associated with frontier AI—highly capable systems that can pose urgent and potentially dangerous risks.
  2. Global Collaboration: The agreement emphasizes the need for governments worldwide to collaborate in addressing the most significant challenges posed by AI. By recognizing the potential for serious harm—whether deliberate or unintentional—stemming from powerful AI models, the participating countries commit to collective action.
  3. Inclusive Participation: The Bletchley Declaration isn’t limited to a select few. It involves 28 countries spanning continents such as Africa, the Middle East, Asia, and Europe. Brazil, France, India, Ireland, Japan, Kenya, Saudi Arabia, Nigeria, and the United Arab Emirates are among those endorsing this critical initiative.
  4. Risk Areas: The declaration acknowledges substantial risks related to frontier AI. These include concerns about intentional misuse, control issues, cybersecurity threats, biotechnology risks, and misinformation.
  5. Scientific Collaboration: Scientific collaboration will play a crucial role in advancing frontier AI safety. Talks involving leading frontier AI companies, experts from academia, and civil society will focus on understanding risks and improving safety measures.

From Reuters

Scarlett Johansson Takes Legal Action Against AI App for Cloning Her Voice in an Ad

Scarlett Johansson, renowned Hollywood actress, has recently taken legal action against an AI app that exploited her name and likeness without her consent. The 22-second advertisement, posted on X (formerly known as Twitter), featured an image-generating app called Lisa AI: 90’s Yearbook & Avatar. This app used real footage of Johansson to create a fabricated image and dialogue in the ad.

The video began with behind-the-scenes clips of Johansson from the set of the Marvel film “Black Widow.” The AI-generated photos that followed closely resembled her, and a voice imitating the actor promoted the app. The ad was eventually taken down, but not before raising concerns about unauthorized use of celebrity images and voices in AI-generated content.

Johansson’s legal representatives confirmed that she is not associated with the company and emphasized their commitment to pursuing appropriate legal remedies. This incident highlights the ongoing challenges surrounding privacy, consent, and the use of AI technology in advertising.

It’s worth noting that Johansson is not alone in facing such issues. Other actors, including Tom Hanks, have also spoken out against unauthorized use of their likenesses in AI-generated content. As technology continues to evolve, protecting individuals’ rights and maintaining ethical boundaries will remain critical in the digital landscape.

From The Verge

AI Summit: Elon Musk Thinks Some Would Prioritize the Planet Over People

Tech visionary Elon Musk has sounded the alarm about artificial intelligence (AI) and its potential impact on humanity. Speaking at the first-ever AI Safety Summit in the UK, Musk expressed concern that certain individuals might wield AI as a tool to safeguard the planet—even if it means endangering human lives.

The phrase “move fast and break things,” popularized by Facebook’s Mark Zuckerberg, once epitomized rapid innovation and disruptive growth in the tech industry. However, Musk argues that when it comes to AI, this mantra falls short. While it has fueled the rise of massive companies and an array of services, AI’s significance transcends mere convenience. It demands caution and responsibility.

So, what are the risks? According to Demis Hassabis, co-founder of Google DeepMind and a prominent figure in AI research, these risks fall into three categories:

  1. Misinformation and Bias: AI systems can generate misinformation and deepfakes, perpetuating false narratives.
  2. Malicious Use: Bad actors may exploit AI for harmful purposes.
  3. Technical Challenges: As AI evolves, ensuring control over powerful general artificial intelligence becomes crucial.

Hassabis emphasizes that addressing these challenges requires immediate action. The UK’s Bletchley Park campus recently hosted a summit where world leaders, tech experts, and academics convened to discuss maximizing AI benefits while minimizing risks. The focus? Extreme threats posed by frontier AI—the cutting-edge technology that Musk describes as the “tip of the spear.”

As we navigate this transformative era, Musk’s warning serves as a reminder: AI’s potential impact on our planet demands thoughtful consideration. Let’s prioritize both progress and humanity as we venture into uncharted territory.

From BBC

White House Unveils A Historic Leap Toward Comprehensive AI Oversight

The White House has launched a groundbreaking website that serves as a beacon for the federal government’s commitment to artificial intelligence (AI). This platform not only showcases the nation’s AI endeavors and accomplishments but also provides essential resources and guidance for researchers, developers, and the public.

The launch coincides with the issuance of the Biden administration’s first-ever executive order on AI, setting new standards for testing, evaluating, and monitoring AI systems. These combined efforts represent historic strides by the U.S. government toward harnessing AI responsibly.

From VentureBeat

AI Can Diagnose Type 2 Diabetes in 10 Seconds from Your Voice

In a groundbreaking study, Canadian medical researchers have harnessed the power of artificial intelligence (AI) to diagnose type 2 diabetes with astonishing speed—just by listening to a person’s voice. Imagine a diagnosis in the time it takes to say “hello.”

The researchers trained an AI model to recognize subtle vocal differences that distinguish someone with type 2 diabetes from someone without. After analyzing voice clips lasting a mere six to ten seconds, the AI achieved impressive accuracy rates of up to 89% for women and 86% for men.

From diabetes

Oct 31, 2023

AI Chatbots Heavily Rely on Copyrighted News Content

In a recent revelation, the News/Media Alliance has shed light on the practices of artificial intelligence (AI) chatbots. These digital conversational agents, including popular ones like ChatGPT, are increasingly relying on copyrighted news media as a primary source of training data. The implications of this trend raise important questions about the intersection of technology, journalism, and intellectual property rights.

From The New York Times

A ChatGPT Update Could Wreck a Bunch of AI Startups

In a recent move, OpenAI has introduced a seemingly minor update to ChatGPT, but its implications could be far-reaching. The chatbot can now interact with PDFs, allowing users to ask questions about these documents. While this enhancement is undoubtedly useful for ChatGPT Plus subscribers, it has sent shockwaves through the startup ecosystem.

From Decrypt

Teachers in India Collaborate with Microsoft Research to Revolutionize Classroom Content Creation

In the dynamic landscape of education, teachers play a pivotal role as navigators, mentors, and leaders. Recognizing their indispensable contribution, Microsoft Research embarked on a groundbreaking project to empower educators in India. The goal? To enhance teaching experiences and transform classroom content creation using the power of artificial intelligence (AI).

From Microsoft

Siemens and Microsoft Partner to Drive Cross-Industry AI Adoption

In a groundbreaking collaboration, industrial manufacturing giant Siemens and tech powerhouse Microsoft have joined forces to usher in a new era of human-machine collaboration. Their shared vision is to empower industries worldwide with the adoption of generative artificial intelligence (AI). The centerpiece of this partnership is the introduction of the Siemens Industrial Copilot, a cutting-edge AI-powered assistant designed to enhance productivity and revolutionize manufacturing processes.

From Reuters

Apple’s M3 Chip Signals a Bold Leap into A.I. Territory

In a surprising twist, Apple has broken its silence on a topic that has dominated the tech landscape for years: Artificial Intelligence (A.I.). The Cupertino giant, known for its tight-lipped approach, has finally stepped into the A.I. arena with its latest product—the M3 chip.

The M3 chip family represents a significant milestone for Apple. These cutting-edge chips, built using the industry-leading 3-nanometer process technology, pack more transistors into a smaller space, resulting in improved speed and efficiency. For the first time, Apple is unveiling not one, but three chips simultaneously: M3, M3 Pro, and M3 Max—all designed to revolutionize personal computing.

While other tech giants have eagerly embraced A.I., Apple has remained conspicuously silent—until now. The M3 family changes that narrative.

From Inc.

Oct 30, 2023

Meta’s AI Research Head Advocates for Open Source Licensing Reform

In a recent move that has sparked both interest and debate, Meta’s Fundamental AI Research (FAIR) center has taken steps to release its large language model, Llama 2, relatively openly and for free. This decision stands in stark contrast to the practices of its biggest competitors. However, the world of open-source software isn’t entirely convinced, viewing Meta’s openness with a cautious eye.

While Meta’s license allows Llama 2 to be freely accessible for many users, it falls short of meeting all the requirements set forth by the Open Source Initiative (OSI). According to the OSI’s Open Source Definition, true open-source software goes beyond merely sharing code or research. It encompasses free redistribution, access to source code, allowance for modifications, and independence from any specific product.

Meta’s limitations include imposing a license fee for developers with more than 700 million daily users and restricting other models from training on Llama. Critics argue that labeling Llama 2 as open-source is misleading due to these constraints. Researchers from Radboud University in the Netherlands have raised concerns about Meta’s claims regarding openness.

Meta’s AI division has previously worked on more open projects, but the current debate highlights the delicate balance between transparency and practical considerations. As the field of AI continues to evolve, discussions around licensing reform are essential. While Meta’s move is commendable in some respects, it also underscores the need for clearer guidelines and a more inclusive approach to open source.

From The Verge

Biden signs executive order to oversee and invest in AI

In a decisive move, President Joe Biden has signed an executive order that sets the stage for a new era in artificial intelligence (AI). The order not only acknowledges the transformative power of AI but also lays down crucial guidelines to ensure its safe and responsible development.

From Yahoo

Embracing the Merge: How Humans Might Coexist with Superintelligent AI

In a thought-provoking revelation, OpenAI’s chief scientist, Ilya Sutskever, has ignited discussions about the future of human-AI integration. The crux of his argument? As superintelligent machines continue to evolve, humans may need to consider becoming “part AI” to remain relevant.

From Business Insider

Oct 29, 2023

G7 to agree AI code of conduct for companies

The Group of Seven (G7) industrial countries are set to agree on a code of conduct for companies developing advanced artificial intelligence (AI) systems. The voluntary code of conduct will provide guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.

The code aims to promote safe, secure, and trustworthy AI worldwide and will set a landmark for how major countries govern AI, amid privacy concerns and security risks.

The code urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle, as well as tackle incidents and patterns of misuse after AI products have been placed on the market. Companies should also post public reports on the capabilities, limitations, use, and misuse of AI systems, and invest in robust security controls.

From Reuters

Every week, we’ll meticulously curate a selection of stories from top AI media outlets and distill them into a digestible format, ensuring you stay up-to-date with the latest developments without having to spend hours browsing the web. From groundbreaking research to real-world applications, ethical debates to policy implications, AI Weekly Digest will be your essential guide to navigating the ever-evolving landscape of artificial intelligence. Join us on this exciting journey as we explore the future of AI together, one week at a time.
