Welcome to this week's edition of This Week in AI. This week we have stories about measuring the true carbon footprint of artificial intelligence, GPT-4 rumors in Silicon Valley, and AI-based drone assassins.
- Exploring the Environmental Impact of Artificial Intelligence
- GPT-4 Rumors in Silicon Valley
- Microdrones: AI-powered drones that could become weapons of mass destruction
💎 Top Stories
Exploring the Environmental Impact of Artificial Intelligence
- AI startup Hugging Face has undertaken the tech sector’s first attempt to estimate the broader carbon footprint of a large language model.
- Hugging Face believes it has developed a new, more accurate way to calculate emissions: one that covers the model's whole life cycle rather than just training.
- To test its new approach, Hugging Face estimated the overall emissions for its own large language model, BLOOM.
- The researchers estimated that training BLOOM directly produced about 25 metric tons of carbon dioxide, a figure that roughly doubled once they accounted for the emissions from manufacturing the computer equipment used for training and from running BLOOM after it was trained (see the sketch after this list).
- While 50 metric tons of carbon dioxide is significant, it's far less than what comparably sized LLMs produce, because BLOOM was trained on a French supercomputer powered mostly by nuclear energy.
- Carnegie Mellon researcher Emma Strubell says the paper sets a new standard for organizations developing AI models and provides clarity on just how enormous LLMs' carbon footprints really are.
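To make the life-cycle accounting concrete, here is a minimal back-of-the-envelope sketch in Python. The three-term structure (training energy, embodied hardware emissions, deployment emissions) mirrors the approach described above, but the specific numbers are illustrative placeholders chosen only to show why the total can be roughly double the training figure; they are not values taken from the Hugging Face paper.

```python
# Back-of-the-envelope life-cycle CO2 estimate for a large language model.
# All numbers are illustrative placeholders, not figures from the BLOOM paper.

def lifecycle_emissions_tons(
    training_energy_mwh: float,        # electricity consumed during training
    grid_intensity_kg_per_mwh: float,  # carbon intensity of the power source
    embodied_hardware_tons: float,     # share of emissions from manufacturing hardware
    deployment_tons: float,            # emissions from serving the model after training
) -> float:
    """Total CO2 (metric tons) across training, hardware manufacturing, and deployment."""
    training_tons = training_energy_mwh * grid_intensity_kg_per_mwh / 1000.0
    return training_tons + embodied_hardware_tons + deployment_tons

# Hypothetical example: a mostly nuclear-powered grid (France's is often cited
# at roughly 57 kg CO2/MWh) keeps the training term small, which is why BLOOM's
# footprint comes out lower than comparably sized models trained elsewhere.
total = lifecycle_emissions_tons(
    training_energy_mwh=433,        # placeholder training energy
    grid_intensity_kg_per_mwh=57,   # approximate French grid intensity
    embodied_hardware_tons=11,      # placeholder manufacturing share
    deployment_tons=14,             # placeholder serving share
)
print(f"Estimated life-cycle emissions: {total:.1f} metric tons CO2")
```

The structural point is what matters: counting only the training-energy term misses roughly half of the total, which is exactly the gap Hugging Face's whole-life-cycle approach is meant to close.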
Read the full story:
GPT-4 Rumors in Silicon Valley
- GPT-4 is the most anticipated AI model in history and is expected to be released sometime between December and February.
- OpenAI has been tight-lipped about GPT-4, but recent leaks suggest that the model will be significantly larger, better at multitasking, less dependent on good prompting, and have a larger context window than its predecessor.
- If these claims prove true, GPT-4 would represent a more than 100x improvement over GPT-3.
Read the full story:
Microdrones: AI-powered drones that could become weapons of mass destruction
- Drones are becoming increasingly prevalent and advanced, and some models can now autonomously select and kill targets.
- This technology is not yet widespread, but it is rapidly improving and could have major implications for the future of warfare.
- Some experts are concerned about the potential misuse of this technology, particularly if it falls into the hands of groups with ill intentions.
Read the full story:
That's it for this edition of This Week in AI. If you were forwarded this newsletter and would like to receive it, you can sign up here.