Google’s AI won the game Go by defying millennia of basic human instinct


South Korean professional Go player Lee Sedol reviews the match after finishing the third match of the Google DeepMind Challenge Match against Google's artificial intelligence program, AlphaGo, in Seoul, South Korea, Saturday, March 12, 2016.

Lee Sedol had seen all the tricks. He knew all the moves. As one of the world’s best and most experienced players of the complex board game Go, it was difficult to surprise him. But halfway through his first match against AlphaGo, the artificially intelligent player developed by Google DeepMind, Lee was already flabbergasted.

AlphaGo shocks Lee Sedol

AlphaGo’s moves throughout the competition, which it won earlier this month, four games to one, weren’t just notable for their effectiveness. The AI also came up with entirely new ways of approaching a game that originated in China two or three millennia ago and has been played obsessively since then. By their fourth game, even Lee was thinking differently about Go and its deceptively simple grid.

The AlphaGo-Lee Sedol matchup was an intense contest between human and artificial intelligence. But it also contained several moves made by both man and machine that were outlandish, brilliant, creative, foolish, and even beautiful. Deconstructing the gameplay helps explain why AlphaGo’s achievement is even more notable than it may seem on the surface and points to a fascinating future for AI.

Here’s how Go works: Two players take turns placing black or white stones on a 19-by-19 grid drawn with lines over a wooden board. Stones are placed at the intersections of the lines. A group of stones is captured and removed when the opponent’s stones completely surround it, cutting off its last empty adjacent point. When there are no more worthwhile moves to make, the player who controls more of the board is the winner.
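For readers who want to see the capture rule in action, here is a toy sketch (our own illustration in Python, not anything from DeepMind) of how a group’s “liberties” — its empty adjacent intersections — determine whether it has been captured:

```python
# Toy Go capture check: a connected group of same-colored stones is
# captured when it has no liberties (empty adjacent intersections) left.

def neighbors(x, y, size=19):
    """Orthogonally adjacent points that lie on the board."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            yield nx, ny

def group_has_liberties(board, x, y):
    """Flood-fill the group containing (x, y); True if any empty neighbor."""
    color = board[(x, y)]
    stack, seen = [(x, y)], {(x, y)}
    while stack:
        cx, cy = stack.pop()
        for n in neighbors(cx, cy):
            if n not in board:          # an empty point is a liberty
                return True
            if board[n] == color and n not in seen:
                seen.add(n)
                stack.append(n)
    return False                        # no liberties: the group is captured

# A lone black stone surrounded by white on all four sides is captured.
board = {(3, 3): "black", (2, 3): "white", (4, 3): "white",
         (3, 2): "white", (3, 4): "white"}
print(group_has_liberties(board, 3, 3))  # False
```

The game’s strategic depth comes not from these rules, which fit in a few lines, but from the astronomical number of positions they allow.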

To explain the most interesting moves of the AlphaGo-Lee match, we worked with Ting Li, a highly ranked professional Go player and vice president of the European Go Federation. We also ran her analysis by Jon Diamond, president of the British Go Association.

Game 1: A different kind of thinking

This was the move that established AlphaGo’s bona fides.

Lee was doing well, and the two were engaged in a skirmish on the left side of the board. But AlphaGo, playing with the white stones, suddenly attacked deep inside Lee-controlled territory on the right side.

“This was totally in the black area,” Li said. “Human players would never think about doing that.”

Lee responded, quickly capturing three of AlphaGo’s stones. It was a poor move by AlphaGo, or so it seemed.

Twenty moves later, AlphaGo had taken three of Lee’s stones in the upper right and occupied about half the area that most human observers had written off as impregnable. Sacrificing three stones turned out to be a key pivot, turning the game in AlphaGo’s favor.

“Even in black’s area, white got a result. It’s unacceptable for black,” Li said. “There are huge variations in a Go game, we can’t even read 1% of them. We have certain patterns in our minds when we play, so this is the kind of move we would never think about.”

Game 2: Psyching out human intelligence

AlphaGo again bucked conventional wisdom in the second game, playing a move that even neophyte players know to avoid. But again, AlphaGo’s naiveté paid off, leading an over-cautious Lee to make unforced errors.

The fourth line from each edge of the board is known as “the line of influence,” and it’s so important to the game that most boards mark it with dots. Young players are taught to play along the line of influence if they are after territory in the middle of the board. But AlphaGo, playing black, placed a stone on the fifth line, a move generally thought to tilt the balance in favor of an opponent.

Lee, apparently unsure how to interpret AlphaGo’s move, could reply aggressively or passively. He chose the passive option. “Lee followed what AlphaGo wanted,” Li said. In the end, Lee’s moves did gain him territory, but not nearly as much as he could have. The AI’s unorthodox move goaded Lee into playing less efficiently.

Game 2: Staying calm amid an invasion

Lee began an invasion of AlphaGo’s territory, but the AI’s response was unexpected: It seemed simply to ignore Lee’s attack. That turned out to be a smart move.

“It’s like your opponent broke into your house and wants a big fight with you, but you go and make a cup of coffee first,” Li said. Instead of responding to the attack directly, AlphaGo first shored up its position, ensuring that Lee wouldn’t gain much territory from the invasion.

In the end, AlphaGo secured two areas—at the top-left of the board and just below it—successfully limiting Lee’s incursion. “Black’s territory is still black’s territory, although white has a weak group there. From here, the game is already finished for Lee Sedol. There’s no chance after this,” Li said. “Lee Sedol played in a normal way, but AlphaGo answered in an unusual way.”

Game 3: Seeing the whole board

AlphaGo found itself in a weak position after a robust response from Lee on move 13. A human player might have focused on that weakness, but AlphaGo was able to ignore it, instead striking back in another area where Lee was powerless to stop it.

Playing the white stones, AlphaGo probed the upper-left corner of the board on move 12. Lee responded with a stout defense, shutting AlphaGo out. Even as the battle intensified there, with Lee having the upper hand, AlphaGo suddenly switched its focus to an area further down the board that was seemingly unconnected with that skirmish. “It’s too far away,” Li said. “We would not consider this kind of move.”

That move, the 32nd of the game, reduced Lee’s moyo, or potential territory, from most of the left-hand side of the board to just the upper-left corner. “Before the invasion, Lee had a big moyo, but now he only has a small corner,” Li said. “All the other space is destroyed.”

Game 4: Lee tries a new way of thinking

In their fourth game, the only one in which Lee was victorious, he appeared to adopt some of AlphaGo’s strategy by pursuing less expected and riskier maneuvers that proved successful in the end.

Lee played a “wedge” move, placing his white stone between two of AlphaGo’s. This is generally avoided since the point of Go is to surround the other player’s stones, and a wedge move is essentially the opposite. But Lee did so right in the middle of the board, puzzling observers.

“It’s hard to say if it was a correct move or not,” Li said.

AlphaGo couldn’t interpret it, either. Thrown off by the wedge move, the AI made a series of amateurish mistakes. “Lee Sedol found a move that was out of AlphaGo’s thinking,” Li said.

Seven moves after Lee’s wedge, AlphaGo had lost its grip on the right side of the board. The AI attempted a wedge move of its own, but it didn’t make any sense in the context of the game. “It’s like an amateur player’s level,” Li said.

Lee went on to win the fourth game. AlphaGo regained its composure to win the fifth and take the match, 4–1. But that brief moment of unusual and effective strategizing by Lee demonstrated that the true value of artificial intelligence reaches far beyond the simplistic narrative of man versus machine. Instead, AI’s potential may be in teaching humans new ways of thinking for ourselves.

Source: qz.com

Google’s DeepMind plans bitcoin-style health record tracking for hospitals


Tech company’s health subsidiary planning digital ledger based on blockchain to let hospitals, the NHS and eventually patients track personal data

Patients at the A&E department of London’s Royal Free Hospital, which has partnered with DeepMind Health. 

Dubbed “Verifiable Data Audit”, the plan is to create a special digital ledger that automatically records every interaction with patient data in a cryptographically verifiable manner. This means any changes to, or access of, the data would be visible.

DeepMind has been working in partnership with London’s Royal Free Hospital to develop kidney monitoring software called Streams and has faced criticism from patient groups for what they claim are overly broad data sharing agreements. Critics fear that the data sharing has the potential to give DeepMind, and thus Google, too much power over the NHS.

Mustafa Suleyman, DeepMind’s co-founder, says that development of the data audit proposal began long before the launch of Streams, when Ben Laurie, co-creator of the widely used Apache server software, was hired by DeepMind. “This project has been brewing since before we started DeepMind Health,” he told the Guardian, “but it does add another layer of transparency.

“Our mission is absolutely central, and a core part of that is figuring out how we can do a better job of building trust. Transparency and better control of data is what will build trust in the long term.” Suleyman pointed to a number of efforts DeepMind has already undertaken in an attempt to build that trust, from its founding membership of the industry group Partnership on AI to its creation of a board of independent reviewers for DeepMind Health, but argued the technical methods being proposed by the firm provide the “other half” of the equation.

Nicola Perrin, the head of the Wellcome Trust’s “Understanding Patient Data” taskforce, welcomed the verifiable data audit concept. “There are a lot of calls for a robust audit trail to be able to track exactly what happens to personal data, and particularly to be able to check how data is used once it leaves a hospital or NHS Digital. DeepMind are suggesting using technology to help deliver that audit trail, in a way that should be much more secure than anything we have seen before.”

Perrin said the approach could help address DeepMind’s challenge of winning over the public. “One of the main criticisms about DeepMind’s collaboration with the Royal Free was the difficulty of distinguishing between uses of data for care and for research. This type of approach could help address that challenge, and suggests they are trying to respond to the concerns.

“Technological solutions won’t be the only answer, but I think will form an important part of developing trustworthy systems that give people more confidence about how data is used.”

The systems at work are loosely related to the cryptocurrency bitcoin, and the blockchain technology that underpins it. DeepMind says: “Like blockchain, the ledger will be append-only, so once a record of data use is added, it can’t later be erased. And like blockchain, the ledger will make it possible for third parties to verify that nobody has tampered with any of the entries.”

Laurie downplays the similarities. “I can’t stop people from calling it blockchain related,” he said, but he described blockchains in general as “incredibly wasteful” in the way they go about ensuring data integrity: the technology involves blockchain participants burning astronomical amounts of energy – by some estimates as much as the nation of Cyprus – in an effort to ensure that a decentralised ledger can’t be monopolised by any one group.

DeepMind argues that health data, unlike a cryptocurrency, doesn’t need to be decentralised – Laurie says at most it needs to be “federated” between a small group of healthcare providers and data processors – so the wasteful elements of blockchain technology need not be imported. Instead, the data audit system uses a cryptographic data structure called a Merkle tree, which allows the entire history of the data to be represented by a relatively small record, yet one which instantly reveals any attempt to rewrite history.
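DeepMind hasn’t published its implementation, but the core idea of a Merkle-tree audit log can be sketched in a few lines: every log entry is hashed, the hashes are folded pairwise into a single root, and any rewrite of an existing entry changes that root. The log entries below are invented for illustration:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash, the building block of the tree."""
    return hashlib.sha256(data).digest()

def merkle_root(entries):
    """Fold a list of log entries into a single root hash."""
    if not entries:
        return h(b"")
    level = [h(e.encode()) for e in entries]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: carry last node up
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

log = [
    "2017-03-09 10:02 clinician A viewed record 17",
    "2017-03-09 10:15 system B updated record 17",
]
root = merkle_root(log)

# Editing any past entry produces a different root, so an auditor who
# holds the previously published root detects the rewrite instantly.
tampered = ["2017-03-09 10:02 clinician X viewed record 17", log[1]]
assert merkle_root(tampered) != root
assert merkle_root(log) == root            # the untouched log still verifies
```

The attraction for auditing is that a hospital need only publish or retain the small root hash; checking it against the full log later exposes any tampering, without the energy cost of a decentralised blockchain.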

Although not technologically complete yet, DeepMind already has high hopes for the proposal, which it would like to see form the basis of a new model for data storage and logging in the NHS overall, and potentially even outside healthcare altogether. Right now, says Suleyman, “It’s really difficult for people to know where data has moved, when, and under which authorised policy. Introducing a light of transparency under this process I think will be very useful to data controllers, so they can verify where their processes have used or moved or accessed data.

“That’s going to add technical proof to the governance transparency that’s already in place. The point is to turn that regulation into a technical proof.”

In the long run, Suleyman says, the audit system could be expanded so that patients have direct oversight of how and where their data has been used. But such a system is still a long way off, and would come only once concerns over how to secure access have been resolved.

Google’s AI Learns How To Code Machine Learning Software


 


Short Bytes: A team of researchers at Google Brain, Google’s AI research group, has created an AI system that designs its own machine learning software. The search ran on 800 GPUs and, in tests, the software designed by the AI system outperformed software designed by humans on benchmark tasks.

Just in case you were worrying that exponential progress in the field of robotics will kill many production jobs, here’s another story along similar lines. The Google Brain artificial intelligence research group has created a new machine learning system that can design machine-learning software.

Surprisingly, when the resulting software was compared with versions written by humans, it surpassed their results.

According to MIT Technology Review, Jeff Dean, the leader of the Google Brain research group, says such efforts could accelerate the adoption of machine-learning software across many sectors of the economy. It should be noted that companies pay premium salaries to machine learning experts – a class of specialists in short supply.

In their experiment, the researchers challenged their software to create machine learning systems. They say that such systems are currently “learning to learn.” They could also reduce the need for vast amounts of data used by machine learning software to give good results.

Using their software, the Google Brain team created learning systems for several related kinds of problems. The system showed an ability to generalize and pick up new tasks. To do so, the researchers used 800 high-powered GPUs.
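Google hasn’t released the code described here, but the “learning to learn” idea — one program searching over the design of another learning program — can be illustrated with a deliberately tiny sketch. Here the outer loop “designs” the inner learner only by choosing its learning rate and keeping whichever design learns best; the real system searched over whole neural architectures:

```python
# Toy "learning to learn": an outer search loop designs the inner
# learner (here, just its learning rate) by trying candidate designs
# and keeping whichever one minimizes the inner learner's final loss.

def inner_learner(lr, steps=50):
    """Gradient descent on f(w) = (w - 3)^2, starting at w = 0."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)
        w -= lr * grad
    return (w - 3) ** 2        # final loss after training

def outer_search(candidates):
    """The 'designer': evaluate each candidate design, keep the best."""
    return min(candidates, key=inner_learner)

best_lr = outer_search([1e-4, 1e-3, 1e-2, 1e-1, 0.5])
print(best_lr, inner_learner(best_lr))  # 0.5 0.0
```

The expensive part in practice is the inner call: each candidate design means training a full model, which is why the Google Brain experiments needed hundreds of GPUs rather than the milliseconds this toy takes.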

So, did you find this recent development in the field of artificial intelligence interesting? Don’t forget to share your views.

 

 

Google’s AIs Are Sending Encrypted Messages to One Another That No One Else Can Decipher


You have to admit, it sounds a little worrying, right? Google’s multiple AIs, which make up a large part of the Google Brain project, have been taught not only how to create their own encrypted messages but also to exchange them with one another in a form no one else can read.

Alice, Bob, and Eve are three neural networks that were created as part of the Google Brain project in an attempt to get to the bottom of deep learning techniques. Every day they are in operation, they are getting smarter and smarter, and now it seems they have just mastered encryption.

During testing, Alice’s task was to create a simple form of encryption and work alongside Bob to devise a key made up of an agreed set of numbers. Bob was able to read the message fine using the key they created together. Meanwhile, Eve tried to intercept the message and decipher it, but failed to break the encryption.
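The networks invented their own scheme, which the researchers did not fully characterize, but the protocol being tested is easy to state: Alice and Bob share a key that Eve doesn’t have, so only Bob can recover the plaintext. A hand-written stand-in for that setup (a simple XOR one-time pad, purely our illustration, not the networks’ learned cipher) looks like this:

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
key = os.urandom(len(message))          # Alice and Bob's shared secret

ciphertext = xor_bytes(message, key)    # Alice encrypts
recovered = xor_bytes(ciphertext, key)  # Bob, holding the key, decrypts

print(recovered == message)             # True: Bob reads the message
# Eve, guessing a key at random, almost certainly recovers garbage.
eve_guess = xor_bytes(ciphertext, os.urandom(len(message)))
print(eve_guess == message)
```

The notable result in the Google Brain experiment was not the cipher itself but that Alice and Bob converged on a working scheme like this through training alone, with Eve’s failure rate as the only feedback.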

However, the great thing about these machines is that they learn continuously. Although they performed pretty poorly at first when it came to secret message passing, Alice in particular showed remarkable improvement and would only get better over time. In terms of digital security, things could improve significantly when these AI systems are let loose, and at the very least they will cause hackers some problems they would rather do without.