Welcome back to AI Weekly Digest #5, your go-to resource for the most important artificial intelligence news and events from Week 20 of 2023.
As an AI enthusiast, you know that staying informed about the rapid advancements in the field is crucial for both personal and professional growth. Our mission is to bring you a concise and comprehensive roundup of the latest breakthroughs, innovations, and discussions shaping the world of AI.
May 20, 2023
Table Of Contents
- G7 Leaders Call for Global AI Standards to Ensure Safe, Ethical Use
- Apple’s AI Ambitions: The Company Is Hiring Generative AI Talent
- OpenAI Releases Official ChatGPT App for iOS
- Meta’s CodeCompose AI Helps Developers Write Code Faster and Easier
- Meta Unveils New AI Chips to Power Its Metaverse Ambitions
- UMass Boston to Establish AI Institute with $5M Donation
- AI-Generated Ads: The Future of Digital Marketing?
- Meta Opens the Door to a New Era of AI Chatbots
- Majority of Americans See AI as a Threat to Humanity
- Elon Musk Regrets Leaving OpenAI, Calls Himself a ‘Huge Idiot’
- Washington Takes First Steps to Regulate AI
- Stability AI Releases Open-Source Generative AI Tool
- Microsoft CEO: AI Will Create New Jobs, But We Need to Prepare People
- Sanctuary AI Unveils New Humanoid Robot That Could Change the World
- OpenAI CEO Calls for New AI Regulatory Agency to Protect Against Misuse
- Microsoft’s New AI Shows Signs of Human Reasoning
- OpenAI CEO Warns of AI Risks to Congress
- Google Takes Steps to Make AI-Generated Images More Transparent
- The Future is Hybrid AI: Distributing Processing for Personalized, Efficient Experiences
- ChatGPT Gets a Major Upgrade: Now Surfing the Web and Using Plugins
- Amazon Bets on AI to Speed Up Deliveries
- How AI Will Change the Workplace: What You Need to Know
- Apple’s AI Strategies for Siri Need Improvement: Here’s Why
G7 Leaders Call for Global AI Standards to Ensure Safe, Ethical Use
Leaders of the Group of Seven (G7) nations on Saturday called for the development and adoption of international technical standards for trustworthy artificial intelligence (AI).
In a joint statement, the G7 leaders said that AI has the potential to “transform our economies and societies,” but that it is important to ensure that AI is used in a way that is “responsible, inclusive, and sustainable.”
The G7 leaders called on the international community to work together to develop “global technical standards” for AI that will help to ensure that AI is used in a safe, ethical, and responsible manner.
The G7 leaders also called on the private sector to play a role in the development of AI standards, and they urged companies to develop AI systems that are “fair, transparent, and accountable.”
The G7’s call for the development of global AI standards is a significant step forward in the effort to ensure that AI is used for good. The development of international standards will help to ensure that AI is developed and used in a way that is consistent with shared democratic values.
From Reuters
May 19, 2023
Apple’s AI Ambitions: The Company Is Hiring Generative AI Talent
Apple is reportedly on the hunt for generative AI talent, a sign that the company is increasingly interested in using AI to create new products and services. The job postings, which were first spotted by TechCrunch, are for roles in machine learning, natural language processing, and computer vision.
From TechCrunch
May 18, 2023
OpenAI Releases Official ChatGPT App for iOS
OpenAI has released an official ChatGPT app for iOS. The app is free to use and allows users to interact with a chatbot that can answer questions, generate text, and translate languages. The app is currently only available in the United States, but OpenAI says that an Android version is coming soon.
From TechCrunch
Meta’s CodeCompose AI Helps Developers Write Code Faster and Easier
Meta has built a code-generating AI model called CodeCompose that is similar to GitHub Copilot. CodeCompose helps developers write code faster and more easily by suggesting code completions as they type. Meta says that CodeCompose is still under development, but it has already been used internally by Meta engineers to write code for a variety of projects.
CodeCompose is trained on a massive dataset of code, which allows it to generate code that is often indistinguishable from human-written code. CodeCompose can also be used to generate code for a variety of programming languages, including Python, Java, and C++.
Meta says that CodeCompose is a powerful tool that can help developers save time and improve the quality of their code. The company plans to make CodeCompose available to the public in the future.
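To make the idea of inline code completion concrete, here is a hypothetical illustration, not actual CodeCompose output: a developer types a function signature and docstring, and an assistant of this kind proposes a body that the developer can accept, edit, or reject.

```python
# Hypothetical example of an assistant-suggested completion (illustrative only).
# The developer writes the signature and docstring; everything below the
# docstring is the kind of body a tool like CodeCompose might propose.

def count_word_frequencies(text: str) -> dict[str, int]:
    """Return a mapping of each word in `text` to how often it appears."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts
```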
From TechCrunch
Meta Unveils New AI Chips to Power Its Metaverse Ambitions
Meta, the parent company of Facebook, has unveiled new custom chips that it says will help power its AI and metaverse ambitions. The chips, the Meta Training and Inference Accelerator (MTIA) and the Meta Scalable Video Processor (MSVP), are designed to speed up AI workloads and large-scale video processing in Meta’s data centers.
MTIA is a custom-designed accelerator optimized for the inference side of Meta’s AI workloads, such as the recommendation models that rank content and ads across its apps. MSVP offloads video transcoding, a task that consumes enormous amounts of general-purpose compute at Meta’s scale.
Meta says the new silicon will help it build more powerful and immersive AI experiences for its users, complementing the infrastructure it already uses for tasks such as speech recognition, natural language processing, and computer vision.
From CNBC
UMass Boston to Establish AI Institute with $5M Donation
The University of Massachusetts Boston (UMass Boston) has announced plans to establish an artificial intelligence (AI) institute with a $5 million donation from Paul English, an entrepreneur and philanthropist. The institute, which will be named the Paul English Applied Artificial Intelligence Institute, will focus on research and education in AI and its applications.
From CBS News
AI-Generated Ads: The Future of Digital Marketing?
The tech giants are all investing heavily in artificial intelligence, and one of the areas where they are seeing the most promise is in the creation of more personalized and engaging ads.
Google, Meta, and Amazon are all using AI to analyze user data in order to create ads that are more likely to be relevant and interesting to each individual user. For example, Google is using AI to create ads that are tailored to the user’s search history, while Meta is using AI to create ads that are targeted to the user’s interests on Facebook.
The use of AI in advertising is still in its early stages, but it has the potential to revolutionize the way that ads are created and delivered. By using AI, tech giants can create ads that are more relevant, engaging, and effective than ever before.
From The Verge
Meta Opens the Door to a New Era of AI Chatbots
In a move that could shake up the artificial intelligence industry, Meta has announced that it is opening up its LLaMA technology. LLaMA is a large language model that can be used to build chatbots capable of fluent, human-like conversation.
Meta’s decision to share LLaMA openly is a major departure from the prevailing industry practice of keeping cutting-edge AI technology under wraps. The company says it believes that an open approach will help accelerate the development of artificial intelligence and make it more accessible to everyone.
From The New York Times
May 17, 2023
Majority of Americans See AI as a Threat to Humanity
A new Reuters/Ipsos poll found that 61% of Americans believe that artificial intelligence (AI) could threaten the future of humanity. The poll, which surveyed more than 4,400 adults, also found that 22% disagreed with that view, while 17% were unsure.
The poll’s findings come at a time of growing concern about the potential dangers of AI. Some experts have warned that AI could pose an existential threat to humanity and have argued that it must be developed in a responsible and ethical manner.
From Decrypt
Elon Musk Regrets Leaving OpenAI, Calls Himself a ‘Huge Idiot’
Elon Musk has expressed regret for his decision to step down from the board of OpenAI, the artificial intelligence research company he co-founded in 2015. In an interview with The New York Times, Musk said he now believes he was a “huge idiot” for leaving the company, and that he regrets not being more involved in its development.
Musk’s decision to step down from OpenAI was reportedly motivated by concerns about the company’s non-profit status. He argued that OpenAI’s mission to “ensure that artificial general intelligence benefits all of humanity” was too broad and ambitious, and that the company would be better off if it were focused on specific, achievable goals.
However, Musk now says that he was wrong to leave OpenAI. He believes that the company is making significant progress in the development of artificial general intelligence, and that he could have played a valuable role in its success.
From Fortune
Washington Takes First Steps to Regulate AI
Washington is grappling with how to regulate artificial intelligence (AI). At this week’s Senate hearing, lawmakers and Sam Altman, the chief executive of OpenAI, agreed that AI should be regulated, but how to do so remains an open question.
Altman and lawmakers from both parties agreed on more than they disagreed. They agreed that AI can be used for good or ill, that regulation is needed to steer it toward the good, and that a new government body should be created to oversee the technology.
The biggest surprise of the discussion was the consensus on creating a new AI agency. Altman proposed a body that would issue licenses for the development of large-scale AI models and set safety regulations and tests that models must pass before being released to the public.
This is a significant development, as it shows that there is a growing consensus in Washington that AI needs to be regulated. It remains to be seen how this consensus will be translated into action, but it is a positive step forward.
From The New York Times
Stability AI Releases Open-Source Generative AI Tool
Stability AI has released StableStudio, an open-source variant of its DreamStudio user interface for generative AI. StableStudio is a web-based application that allows users to create and edit generated images. It is still under development, but it has the potential to become a powerful tool for artists, designers, and anyone else who wants to create images with generative AI.
From The Verge
Microsoft CEO: AI Will Create New Jobs, But We Need to Prepare People
Microsoft CEO Satya Nadella recently spoke about the concerns around artificial intelligence (AI) and its impact on jobs and education. He acknowledged that AI has the potential to disrupt many jobs, but he also believes that it will create new opportunities. Nadella said that we need to focus on “people on people jobs” that require significant interaction, such as eldercare. He also said that we need to prepare people for the jobs of the future by teaching them the skills they need to use AI to create products and solutions.
From CNBC
May 16, 2023
Sanctuary AI Unveils New Humanoid Robot That Could Change the World
Sanctuary AI, a Vancouver-based robotics company, has unveiled its latest creation: a humanoid robot named Phoenix. The robot stands 5’7″ tall and weighs 155 pounds. It is capable of lifting payloads up to 55 pounds and traveling at three miles an hour.
Phoenix is equipped with a variety of sensors, including cameras, microphones, and depth sensors. This allows it to perceive its surroundings and interact with the world around it. The robot also has a number of actuators, which allow it to move its arms, legs, and head.
Sanctuary AI says that Phoenix is designed to be a versatile robot that can be used for a variety of tasks. It could be used in warehouses to pick and pack orders, or in factories to assemble products. The robot could also be used in healthcare to help with tasks such as providing companionship to elderly patients or assisting with rehabilitation.
The development of Phoenix is a significant milestone for Sanctuary AI. The company is one of a number of startups that are working on developing humanoid robots. These robots have the potential to revolutionize a wide range of industries, and they could have a major impact on the way we live and work.
From TechCrunch
OpenAI CEO Calls for New AI Regulatory Agency to Protect Against Misuse

OpenAI CEO Sam Altman used his Senate testimony on Tuesday, May 16, 2023, to call for a new regulatory agency dedicated to artificial intelligence. Altman proposed a government body that would issue licenses for the development of large-scale AI models and set safety standards and tests that models must pass before they are released to the public.
Altman argued that licensing and testing requirements would help guard against misuse of increasingly capable AI systems, and he also called for an international body to set standards for AI development and use.
From Decrypt
Microsoft’s New AI Shows Signs of Human Reasoning
Microsoft researchers have published a research paper claiming that the new AI system they have been testing, OpenAI’s GPT-4, shows signs of human-like reasoning. The researchers say that GPT-4 is able to understand and respond to complex questions in a way that resembles how humans do. For example, the system can answer questions about hypothetical situations, make inferences, and draw conclusions.
The researchers believe that GPT-4’s ability to reason is a significant step toward artificial general intelligence (AGI), a hypothetical type of AI that would be able to reason, learn, and solve problems in a way that is indistinguishable from humans.
From The New York Times
OpenAI CEO Warns of AI Risks to Congress
OpenAI CEO Sam Altman testified before a Senate Judiciary subcommittee on Tuesday, May 16, 2023, about the risks of artificial intelligence. Altman warned that AI could be used to create fake news, propaganda, and other forms of disinformation. He also said that AI could be used to automate jobs, which could lead to job losses.
Altman said that it is important to regulate AI to ensure that it is used for good and not for harm. He called for the creation of an international body to set standards for AI development and use.
Altman’s testimony comes amid growing concern about the potential risks of AI, from high-profile cases of AI-generated fake news and propaganda to fears that automation will displace workers.
His remarks are a reminder that AI is a powerful technology that can be used for good or for harm, and that mitigating its risks will take deliberate effort from both industry and government.
From CNN
Google Takes Steps to Make AI-Generated Images More Transparent
Google is working on making it easier for people to spot AI-generated images. The company is adding new features to Google Search that will show users information about the origin and context of an image. This information can help users to determine if an image is real or fake.
Google is also working with other companies to develop standards for marking AI-generated images. This will make it easier for people to identify AI-generated images, even if they are not using Google Search.
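As a rough illustration of how marking might work in practice, the sketch below writes and reads a provenance tag stored as image metadata; the tag names are hypothetical and are not the standard Google and its partners are developing.

```python
# Illustrative sketch only: storing and reading an "AI-generated" marker as
# image metadata with Pillow. The tag names are made up for this example.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Write a provenance tag into a PNG's text metadata.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")            # hypothetical tag name
metadata.add_text("generator", "example-model-v1")   # hypothetical field

img = Image.new("RGB", (64, 64), color="gray")
img.save("labeled.png", pnginfo=metadata)

# Later, a viewer or a search engine could read the tag back.
with Image.open("labeled.png") as reopened:
    print(reopened.info.get("ai_generated"))  # -> "true"
```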
The ability to spot AI-generated images is becoming increasingly important as the technology grows more sophisticated, because such images can be used to spread fake news and propaganda.
From Yahoo
May 15, 2023
The Future is Hybrid AI: Distributing Processing for Personalized, Efficient Experiences
According to Qualcomm’s CEO, generative AI is poised to impact every aspect of life and business, with the market potential reaching $1 trillion. While AI will transform how we search and create content, its true power lies in enhancing daily experiences through personalized digital assistants, automated document drafting, and customized recommendations.
Realizing AI’s full potential requires low-power on-device processing and cloud computing working in harmony. A hybrid AI model distributes processing between devices and the cloud to optimize cost, efficiency, privacy, reliability, and performance. AI can run on the device, in the cloud, or both, delivering a seamless user experience in which responses come back as quickly as a web search, in under a second.
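As a conceptual sketch of what “distributing processing” can mean, the toy router below sends a request either to a small on-device model or to a larger cloud model based on privacy, freshness, and prompt size. It is illustrative only; the fields and thresholds are assumptions, not Qualcomm’s design.

```python
# Toy hybrid-AI router (illustrative assumptions, not a vendor implementation).
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    contains_private_data: bool   # e.g. personal messages or photos on the device
    needs_fresh_web_data: bool    # e.g. current events or live prices

ON_DEVICE_WORD_BUDGET = 512       # assumed capacity of a small local model

def route(request: Request) -> str:
    """Decide whether a single request runs on the device or in the cloud."""
    if request.contains_private_data:
        return "on-device"        # keep sensitive data local
    if request.needs_fresh_web_data:
        return "cloud"            # the local model has no live data
    if len(request.prompt.split()) <= ON_DEVICE_WORD_BUDGET:
        return "on-device"        # cheap, low-latency path
    return "cloud"                # heavy prompts go to the bigger model

print(route(Request("summarize my notes", contains_private_data=True, needs_fresh_web_data=False)))
# -> on-device
```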
From Fortune
ChatGPT Gets a Major Upgrade: Now Surfing the Web and Using Plugins
OpenAI has released a significant update to its popular ChatGPT app, enabling users to browse the internet and access over 70 third-party plugins through the AI assistant. The update provides ChatGPT with up-to-date data and the ability to answer questions on current events, overcoming previous limitations from its training on data ending in 2021.
While still in beta, the web-browsing and plugin features are available to ChatGPT Plus subscribers for $20 per month. The update comes just after Google announced an overhaul of its AI assistant, Bard, integrating it into products like Gmail and Maps as well as making it freely available in most countries.
The moves from OpenAI and Google highlight the companies’ aims to put AI at the forefront, even as CEOs like OpenAI’s Sam Altman prepare to address legislators on matters of AI safety and regulation. For ChatGPT users, the update provides a more robust experience through access to the latest information and tools on the web. The days of an AI stuck in the past are over, as ChatGPT proves ready to surf into the present and future.
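For readers curious how a browsing or plugin step fits into a chat response, here is a simplified, hypothetical sketch of a tool-use loop. It is not OpenAI’s implementation; the helper functions are stand-ins for a real plugin and for the model’s own decision-making.

```python
# Simplified, hypothetical tool-use loop (not OpenAI's actual implementation).

def fake_web_search(query: str) -> str:
    """Stand-in for a browsing plugin; a real plugin would call a live search API."""
    return f"[top search results for: {query!r}]"

def needs_fresh_data(question: str) -> bool:
    """Crude heuristic standing in for the model's decision to reach for a tool."""
    return any(word in question.lower() for word in ("today", "latest", "current", "2023"))

def answer(question: str) -> str:
    if needs_fresh_data(question):
        evidence = fake_web_search(question)   # call the tool
        return f"Based on {evidence}, here is an up-to-date answer to: {question}"
    return f"Answering from training data alone: {question}"

print(answer("What is the latest ChatGPT feature?"))
```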
From The Independent
Amazon Bets on AI to Speed Up Deliveries
Amazon is focusing on leveraging artificial intelligence to accelerate delivery times by positioning inventory closer to customers, according to Stefano Perego, Vice President of Customer Fulfillment. AI helps Amazon in areas like transportation planning, product recommendations, and most importantly, determining optimal inventory placement to minimize distance and maximize speed.
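To make “optimal inventory placement” concrete, here is a toy example, with made-up numbers and location names, of choosing the fulfillment center that minimizes demand-weighted shipping distance; Amazon’s actual models are far more sophisticated.

```python
# Toy inventory-placement example (hypothetical data, not Amazon's system).

# Assumed regional demand forecast for one product (units per week).
demand_by_region = {"Seattle": 120, "Dallas": 80, "Miami": 40}

# Assumed distances (miles) from each fulfillment center to each region.
distance = {
    "FC-West":  {"Seattle": 20,   "Dallas": 1700, "Miami": 2700},
    "FC-South": {"Seattle": 2100, "Dallas": 30,   "Miami": 1300},
}

def expected_distance(fc: str) -> float:
    """Demand-weighted average shipping distance if all units ship from `fc`."""
    total_units = sum(demand_by_region.values())
    return sum(distance[fc][region] * units
               for region, units in demand_by_region.items()) / total_units

best_fc = min(distance, key=expected_distance)
print(best_fc, round(expected_distance(best_fc), 1))  # -> FC-West 1026.7
```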
From CNBC
May 14, 2023
How AI Will Change the Workplace: What You Need to Know
According to a recent article in The Wall Street Journal, AI is changing the way managers do their jobs: from who gets hired to how employees are evaluated to who gets promoted. The growing use of AI in the workplace raises many questions.
From The Wall Street Journal
Apple’s AI Strategies for Siri Need Improvement: Here’s Why
Apple’s AI strategies, especially for Siri, aren’t very smart, argues a recent opinion piece. The article suggests that Apple should invest in building up Siri’s ability to handle more complex tasks, such as booking appointments, making reservations, and more.
From Business Insider
Every week, we’ll meticulously curate a selection of stories from top AI media outlets and distill them into a digestible format, ensuring you stay up-to-date with the latest developments without having to spend hours browsing the web. From groundbreaking research to real-world applications, ethical debates to policy implications, AI Weekly Digest will be your essential guide to navigating the ever-evolving landscape of artificial intelligence. Join us on this exciting journey as we explore the future of AI together, one week at a time.