Neural networks: a computing model inspired by the human brain

An "activation atlas" representing related visualizations in a single neural net. (OpenAI)

Neural networks are computer systems that mimic the inner workings of the brain. They underlie many AI-enabled technologies, such as facial recognition and vehicle routing, and are used to recognize patterns and objects in images, audio, video and other sensory data.

Often, neural networks teach themselves to recognize objects, such as an apple, by viewing many example images labeled "Apple" or "Not apple." This is known as supervised learning, and it is used for classification and recognition.
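As a rough illustration, here is a minimal supervised-learning sketch in plain NumPy: a single artificial neuron (logistic regression) learning to separate "apple" from "not apple" examples. The two features (redness, roundness) and the data are invented for this sketch, not drawn from any real system.

```python
# A minimal supervised-learning sketch: one artificial neuron learning
# "apple" vs. "not apple" from labeled examples. Features are invented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled data: apples tend to be red and round.
apples = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(50, 2))
others = rng.normal(loc=[0.3, 0.4], scale=0.2, size=(50, 2))
X = np.vstack([apples, others])
y = np.array([1] * 50 + [0] * 50)  # 1 = "Apple", 0 = "Not apple"

w = np.zeros(2)
b = 0.0

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Gradient descent on the cross-entropy loss.
for _ in range(1000):
    p = sigmoid(X @ w + b)          # predicted probability of "Apple"
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

print(sigmoid(np.array([0.85, 0.9]) @ w + b))  # red and round -> near 1
print(sigmoid(np.array([0.2, 0.3]) @ w + b))   # neither      -> near 0
```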

But this is not the only way they learn. A developer may feed a neural network unlabeled data in the hope that it will uncover unknown patterns in, say, weather or financial data. This is an example of unsupervised learning.
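Here is one minimal sketch of the idea, using k-means clustering, a classic (non-neural) unsupervised algorithm, on invented "weather" readings. The algorithm is given no labels and groups the points on its own.

```python
# A minimal unsupervised-learning sketch: k-means clustering in NumPy.
# The "weather" readings below are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: two hidden groups of (temperature, humidity) readings.
data = np.vstack([
    rng.normal([30.0, 0.7], [3.0, 0.05], size=(100, 2)),  # warm, humid days
    rng.normal([10.0, 0.3], [3.0, 0.05], size=(100, 2)),  # cool, dry days
])

k = 2
centers = data[rng.choice(len(data), size=k, replace=False)]

for _ in range(20):
    # Assign each point to its nearest center.
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Move each center to the mean of its assigned points.
    centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])

print(centers)  # two cluster centers, discovered without any labels
```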

And in other cases, a neural network may be placed in an environment and allowed to explore, learning from actions it performs over and over. This is known as reinforcement learning, and it is used to train systems to play video games and to control self-driving vehicles.
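A minimal sketch of the idea, using tabular Q-learning on an invented one-dimensional corridor: the agent is never told what to do, only rewarded when it reaches the goal. (Real game-playing and driving systems use neural networks in place of this tiny table.)

```python
# A minimal reinforcement-learning sketch: tabular Q-learning on an
# invented 1-D corridor. The agent learns by trial and error, with a
# reward only at the rightmost position.
import random

N_STATES = 6          # positions 0..5; reaching 5 ends the episode
ACTIONS = [-1, +1]    # step left or step right
q = [[0.0, 0.0] for _ in range(N_STATES)]

alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes (and on ties), otherwise exploit the best action.
        if random.random() < epsilon or q[state][0] == q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if q[state][0] > q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + future value.
        q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
        state = next_state

# After training, the learned policy should be "always step right."
print([("left", "right")[row.index(max(row))] for row in q[:-1]])
```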

In the brain's biological neural network, billions of neurons communicate via electrical signals. In a machine's artificial neural network, a layered thicket of math operations, called neurons, communicates via numbers: each neuron's output feeds into the operations of other neurons, and so on.
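In code, one such "neuron" amounts to a few arithmetic operations. A minimal sketch, with arbitrary weights standing in for values a real network would learn:

```python
# A minimal sketch of one artificial "neuron": a weighted sum of its
# numeric inputs passed through a nonlinearity. The weights here are
# arbitrary; in a real network they are learned during training.
import numpy as np

def neuron(inputs, weights, bias):
    # Multiply each input by a weight, sum, add a bias, then squash
    # with an activation function (here, ReLU).
    return max(0.0, float(np.dot(inputs, weights) + bias))

x = np.array([0.5, -1.2, 3.0])   # numbers from other neurons (or pixels)
w = np.array([0.4, 0.1, 0.6])
print(neuron(x, w, bias=0.2))    # this output feeds further neurons
```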

In a neural net, neurons fire in response to certain aspects of an image, for example. In some cases, neurons send data in one direction, moving from general, low-resolution patterns toward detailed, filtered representations of objects. In others, the network sends data back and forth.
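A minimal sketch of the one-directional (feedforward) case, with random placeholder weights rather than a trained model:

```python
# A minimal feedforward sketch: data moves in one direction, layer by
# layer, from raw inputs toward more filtered representations. Layer
# sizes and weights are placeholders, not a trained network.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)   # weighted sums + ReLU activation

x = rng.random(8)                       # e.g., a tiny patch of pixel values
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)  # early layer: coarse patterns
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)   # later layer: refined features

h = layer(x, W1, b1)     # first, general low-resolution features
out = layer(h, W2, b2)   # then, more detailed representations
print(out)
```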

One of the more fascinating aspects of neural networks is that it is not always clear how they make decisions. How do they differentiate between a dog and a cat? Why do they make the moves they do in chess, Go, and StarCraft? Each network is different, and it is not clear what goes on inside its operations.

Some have tried to reverse engineer neural networks in order to understand them. To do that, developers at Google and OpenAI began pulling intermediate data from neural networks to visualize what the AI sees.

If the neural network is learning to identify a dog, a visualization pulled from early in the process may show light, see-through geometric shapes on top of a given picture. That's the AI looking for general, low-resolution patterns. Data pulled later, as the neural net applies greater detail to an image, often produces hallucinogenic pictures covered in objects that humans can recognize: an animal-like shape with several snouts, for example.
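Here is a rough sketch of how such intermediate data can be pulled out in practice, using PyTorch forward hooks on an invented toy network. The real feature-visualization work at Google and OpenAI used large trained networks; this only illustrates the mechanism of capturing a layer's output mid-computation.

```python
# Hypothetical sketch: capturing intermediate activations with PyTorch
# forward hooks. The tiny untrained model here is made up for
# illustration; real feature-visualization work uses trained networks.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # early layer: coarse patterns
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # later layer: finer detail
    nn.ReLU(),
)

captured = {}

def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()   # stash the layer's raw output
    return hook

model[0].register_forward_hook(save_activation("early"))
model[2].register_forward_hook(save_activation("late"))

image = torch.randn(1, 3, 64, 64)   # stand-in for a real photo
model(image)

for name, act in captured.items():
    print(name, tuple(act.shape))   # e.g., early (1, 8, 64, 64)
```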

As neural networks surpass humans at pattern and object recognition, as well as in games and elsewhere, it will become increasingly worrisome that we don't understand how they make choices.

Jason Yosinski, who works at Uber AI Labs, told the New York Times that machine decisions may only become harder to understand:

To a certain extent, as these networks get more complicated, it is going to be fundamentally difficult to understand why they make decisions... It is kind of like trying to understand why humans make decisions.



OpenAI


OpenAI was founded as a non-profit research lab by Elon Musk and Sam Altman in 2015.

In February 2018, Musk left, citing a conflict of interest with his work on Tesla's Autopilot system.

In 2019, with Altman in charge, OpenAI formed OpenAI LP, a for-profit company that it said would allow it "to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission."

OpenAI has produced some impressive accomplishments: in early 2019, its neural networks beat the world's best Dota 2 players. And in July, Microsoft invested $1 billion in OpenAI to pursue artificial general intelligence, an accomplishment many think is still decades away, if not longer.