Pentagon Cancels JEDI Contract - This Week in AI




Welcome to our This Week in AI roundup. Our goal with this roundup is to provide an overview of the week's most important news and industry developments.

This week we have stories about the Pentagon's JEDI contract, GitHub Copilot, and reinforcement learning for chip design.

Pentagon Cancels JEDI Cloud Contract

The Pentagon has announced that it will scrap JEDI, its $10 billion cloud computing initiative, and start over with a new contract that will solicit technology from both Amazon and Microsoft. The new contract, called the Joint Warfighter Cloud Capability, seeks to avoid the legal and political battle that surrounded JEDI and promises better support for data-intensive applications, such as military decision-making with artificial intelligence.

Read the full story.

Stay up to date with AI

We're an independent group of machine learning engineers, quantitative analysts, and quantum computing enthusiasts. Subscribe to our newsletter and never miss our articles and the latest news.


OpenAI Warns GitHub Copilot May Be Susceptible to Bias

Copilot is a tool created by GitHub and OpenAI that suggests lines of code that developers can accept or modify. The service is powered by Codex, an AI model trained on billions of lines of open-source code. According to new research from OpenAI, the service may have serious flaws, including biases and sample inefficiencies. The paper highlights pitfalls encountered in developing Codex, chiefly misrepresentation and safety challenges.

As the author writes:

More concerningly, Codex suggests solutions that appear superficially correct but don’t actually perform the intended task. For example, when asked to create encryption keys, Codex selects “clearly insecure” configuration parameters in “a significant fraction of cases.”

The model has also suggested the use of compromised packages and dependencies.
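The "superficially correct but insecure" failure mode is worth picturing concretely. The sketch below is a hypothetical illustration (not actual Codex output, and the function names are made up): both functions run and look plausible, but the first quietly uses a key length far too short to be safe, which is exactly the kind of suggestion a reviewer has to catch.

```python
import secrets

# Hypothetical sketch of the failure mode described in the paper:
# key-generation code that runs and looks plausible but uses an
# insecure parameter.

def generate_key_insecure(n_bytes: int = 8) -> bytes:
    # A 64-bit key is trivially brute-forceable -- "superficially correct".
    return secrets.token_bytes(n_bytes)

def generate_key(n_bytes: int = 32) -> bytes:
    # Reject key lengths below 128 bits instead of silently accepting them.
    if n_bytes < 16:
        raise ValueError("key too short: use at least 16 bytes (128 bits)")
    return secrets.token_bytes(n_bytes)
```

Both versions type-check and execute, which is precisely why this class of bug is hard to spot in autocompleted code.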

Read the full story.

Data Labeling is Highly Inconsistent

Supervised machine learning, in which models learn from labeled training data, is only as good as the data itself. According to a recent MIT article, datasets used to train commercial systems contain thousands to millions of mislabeled samples.

The researchers found that labeling practices vary greatly from paper to paper, and that many of the studies they examined provided no information about who did the labeling or where the data came from. These errors can lead scientists to draw false conclusions about which models perform best in the real world, ultimately undermining benchmarks.
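One common way to quantify labeling inconsistency is to have several annotators label each example and measure how often they disagree with the majority vote. The toy sketch below (all data invented for illustration) computes that per-example disagreement rate:

```python
from collections import Counter

# Toy sketch of measuring labeling inconsistency: several annotators
# label each example, and we compute the fraction of annotators that
# disagree with the majority vote.

def disagreement_rates(labels_per_item):
    rates = []
    for labels in labels_per_item:
        majority_count = Counter(labels).most_common(1)[0][1]
        rates.append(1 - majority_count / len(labels))
    return rates

rates = disagreement_rates([
    ["cat", "cat", "dog"],   # one of three annotators disagrees
    ["cat", "cat", "cat"],   # unanimous
])
```

Examples with a high disagreement rate are candidates for relabeling or removal, which is one simple guard against the noise the article describes.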

Read the full story.

Using Reinforcement Learning to Fit Billions of Transistors onto Chips

Recent breakthroughs in artificial intelligence have allowed algorithms to learn the art of chip design. Nvidia, Google, and IBM are among the chipmakers testing AI technologies to help arrange components and wiring on complex processors.

Haoxing Ren, a principal research scientist at Nvidia, is experimenting with how reinforcement learning can help arrange components on a chip and how to wire them together. The approach, which lets a machine learn from experience and experimentation, has been key to some major advances in AI.
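To make the setting concrete: placement work commonly scores a layout by half-perimeter wirelength (HPWL), the sum over each net of the width plus height of the bounding box of its pins. The sketch below is an illustrative toy, not Nvidia's method; a simple random search stands in for the learning agent, and all component and net names are made up.

```python
import random

# Toy placement sketch: put components on a grid and score a placement
# by half-perimeter wirelength (HPWL), a standard proxy metric.

def hpwl(placement, nets):
    # placement: {component: (x, y)}; nets: lists of connected components
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def random_search(components, nets, grid=8, iters=200, seed=0):
    # Stand-in for an RL agent: keep the best random placement found.
    rng = random.Random(seed)
    cells = [(x, y) for x in range(grid) for y in range(grid)]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        p = dict(zip(components, rng.sample(cells, len(components))))
        cost = hpwl(p, nets)
        if cost < best_cost:
            best, best_cost = p, cost
    return best, best_cost

best, best_cost = random_search(["a", "b", "c"], [["a", "b"], ["b", "c"]])
```

In the published research the stand-in search above is replaced by an agent trained with reinforcement learning against metrics like wirelength and congestion, learning from many placement attempts rather than sampling blindly.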

Read the full story.

That's it for this edition of This Week in AI. If you were forwarded this newsletter and would like to receive it, you can sign up here.
