AI-powered App Helps People with Autism Improve Social Skills

5 December 2018

Artificial Intelligence is one of today's most talked-about topics. We have witnessed remarkable progress over the past decade, and AI is transforming our world. Data scientists and engineers are developing AI applications and solutions that can handle increasingly complex problems, many of which help bridge the digital divide and create a more inclusive society.

Brain Power, a company founded to address autism through heads-up wearable computers such as Google Glass, has released a range of apps that produce quick insights for children with autism, their parents, and their teachers.


Their mission is “to build systems that empower children and adults all along the autism spectrum to teach themselves practical life skills, and assess their progress numerically”.

The newest app in the series, Emotion Charades, monitors and measures children's anxiety levels using Google Glass and AI. The person with autism sees an emoji floating on either side of someone's face, then tilts their head to choose the one that matches the facial expression. The software monitors the child's activity and body language during play. The data is then uploaded to the cloud, where AI is used to generate insights and quick feedback.

Brain Power’s Emotion Charades app.

Behind Emotion Charades, as with every other Brain Power product, there is extensive research, rigorous product and clinical testing, and acceptance by families and practitioners. Brain Power's technology has received positive feedback from people on the spectrum, parents, and professionals, and has been validated through published research and clinical trials.

In 2018, the US Centers for Disease Control and Prevention (CDC) determined that approximately 1 in 59 children is diagnosed with an autism spectrum disorder (ASD). For people with autism, technology can mean improved communication and interaction skills. It is therefore crucially important to use new technologies to help people with autism achieve their full potential and to build an inclusive society.

Instagram Predicts The Flu Using Artificial Intelligence

3 December 2018


Every day, Google processes more than 3.5 billion searches. That is, we must admit, an enormous amount of data coming from Google search queries alone, and this search data contains a lot of particularly valuable information about the searchers.

Back in 2008, Google launched a new project intended to take advantage of this data. "Google Flu Trends", as the project was called, used search query data to forecast flu outbreaks. However, although the ambitions were high and the data was there, Google Flu Trends failed after a few years, in 2013.

Five years later, we see another attempt to use social network data to forecast influenza epidemics. In a pre-print paper posted to arXiv on Tuesday, researchers from Finland reveal their method for predicting flu outbreaks using Artificial Intelligence and Instagram posts.

They confirmed their hypothesis that Instagram posts have a statistically significant correlation with flu outbreaks. In the paper, they describe a method that relies on Artificial Intelligence to correlate the number of hashtag references in Instagram posts with official flu incidence as recorded by Finland's National Institute for Health and Welfare.

Big (Instagram) Data

They report collecting over 22,000 Instagram posts from 2012 to 2018. All of the data was public, gathered by searching for hashtags containing words such as "flu" and by analyzing the image content of posts showing boxes and bottles of flu medication.

They used public health data to predict historical outbreaks of influenza viruses. Their method employs convolutional neural networks, such as Inception and ResNet, together with the gradient-boosted tree algorithm XGBoost. In the paper, they show that the method can predict flu outbreaks in the final year of data using only data from previous years.
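The shape of such a pipeline can be sketched in a few lines. Everything below is illustrative: the feature values are synthetic stand-ins for hashtag counts and CNN-derived image scores, and scikit-learn's GradientBoostingRegressor stands in for XGBoost, which works the same way from the caller's perspective.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical weekly features: hashtag counts plus CNN-derived
# image scores (e.g. confidence that a post shows medicine boxes).
n_weeks = 300
X = rng.random((n_weeks, 5))
# Hypothetical target: official weekly flu incidence counts.
y = 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 0.5, n_weeks)

# Train on earlier weeks and predict the final held-out stretch,
# mirroring the paper's "predict the last year from previous years" setup.
split = 250
model = GradientBoostingRegressor(random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(pred.shape)  # one prediction per held-out week
```

The key design point is temporal splitting: the model never sees the period it is evaluated on, which is what makes the forecast claim meaningful.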

This shows that social network data holds valuable information. However, we still have to be careful about how we extract this information and how far we rely on it when making decisions. There is also a privacy concern when dealing with public data, especially from social networks.

Using Artificial Intelligence to Help Blind People “See” Instagram Photos

30 November 2018


In 2016, Facebook announced new features aimed at a more accessible social network: Facebook would use Artificial Intelligence to provide text descriptions of photos for visually impaired people. However, Instagram, the Facebook-owned photo- and video-sharing social network, did not have these features at the time.

Today, Instagram has announced that it will increase accessibility by introducing two new improvements to make it easier for people with visual impairments to use the social network.

Similar to what Facebook already offers, the new AI-based feature will automatically generate text descriptions for photos. This so-called automatic alternative text is generated using object recognition technology developed at Facebook.

Alongside this, a second feature is being added to Instagram: custom alternative text. It allows people to add a richer description when uploading a photo, making Instagram more accessible. According to Instagram's blog post, only if no custom alternative text has been added will Instagram automatically generate a description using its AI-based feature.
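The fallback order described in the announcement amounts to a simple rule: author-supplied text wins, the AI-generated description fills the gap. A minimal sketch, with hypothetical field names (`custom_alt`, `auto_alt`) since Instagram's internal representation is not public:

```python
def alt_text_for(photo: dict) -> str:
    """Return the description a screen reader would announce.

    'custom_alt' stands in for author-supplied alternative text,
    'auto_alt' for the AI-generated description.
    """
    custom = photo.get("custom_alt", "").strip()
    if custom:  # custom text always takes priority
        return custom
    return photo.get("auto_alt", "Photo")  # AI-generated fallback

print(alt_text_for({"custom_alt": "My dog at the beach"}))
print(alt_text_for({"auto_alt": "Image may contain: dog, outdoor"}))
```

The first call returns the author's own description; the second falls back to the generated one.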

Instagram’s active users chart over time. Source: Statista.

There are more than 285 million visually impaired people in the world today. With Instagram being one of the most popular and fastest-growing social networks, a great number of people will benefit from these new features. The social network says these are only the first steps toward a more accessible Instagram, with many more to come.

Amazon’s “Machine Learning University” Now Available for Free

28 November 2018


The US e-commerce and cloud computing giant Amazon has announced that its "Machine Learning University" will now be available for free. A collection of more than 30 online courses will be offered free of charge through its subsidiary, Amazon Web Services (AWS).

The goal behind this step is to make Machine Learning available to all developers through AWS. Amazon expects to help developers, data scientists, data platform engineers, and business professionals in building more intelligent applications through machine learning.

 

“Regardless of where they are in their machine learning journey, one question I hear frequently from customers is: ‘how can we accelerate the growth of machine learning skills in our teams?’ ” says Dr. Matt Wood, who made the announcement on behalf of Amazon.

 

“These courses, available as part of a new AWS Training and Certification Machine Learning offering, are now part of my answer,” adds Dr. Wood, expressing enthusiasm that machine learning will become broadly available. Amazon expects Machine Learning to go from something affordable only to big, well-funded organizations to a core skill in every developer's skill set.

 

The courses, available through Amazon's "Machine Learning University", comprise 30 self-service, self-paced digital courses totaling more than 45 hours, all provided for free. Each course starts with the fundamentals and builds on them through real-world examples and labs, most of them drawn from real problems encountered at Amazon.

The service offers specialized and tailored learning paths depending on your profession. Moreover, it offers an AWS certification that can help developers get recognized in the industry.

Amazon’s free machine learning courses are available here.

Satlink Now Uses AI to Revolutionize Fishing Activities

27 November 2018


Artificial Intelligence is dramatically improving our world in many ways. AI solutions have proved effective at simple, monotonous tasks, but also at more complex activities, those requiring the processing of multiple signals, data streams, and accumulated knowledge.

AI is now arriving even in the fishing business. A Spanish company called Satlink has big ambitions for employing AI to track fishing activity.

Since its creation in 1992, Satlink has maintained a commitment to research, innovation, and development in the field of satellite telecommunications. The company developed a monitoring system called SeaTube that is able to track fishing activity. SeaTube is a video recording solution service installed on board vessels, allowing fishing companies to control and optimize the fishing activity on their boats.

Now, the next big step for Satlink is bringing AI to its successful monitoring system. The company has big plans in this direction and believes AI is capable of revolutionizing fishing. Satlink is confident that the ability to analyze video data from its monitoring system will greatly help the observation and optimization of fishing activities.

Fishermen are unhappy

However, immediate reactions from fishermen show that they are skeptical and oppose the idea of employing AI in fishing.

Even when it can be technically explained (and that's not always the case), AI's decision-making process is usually too difficult for most people to understand, a point De la Cal, Satlink's international business development manager, acknowledged: "Initially, companies are reluctant to use the technology, but when it's explained, they get why they need it".

The conservation of fish stocks and sustainable development are of primary interest. Both sides (fishermen and Satlink's managers) agree on this, but they take opposing views on the whole idea of AI tracking fishing activities.

In any case, it is almost certain that AI will arrive in the fishing business. The question that remains is how well and how fast companies and fishermen will adopt the technology, and that will directly influence the impact AI has on the whole fishing business.

Trusting AI to Interpret Police Body-Cam Videos

26 November 2018


Recently, we have seen technology taking its place in police investigations. In the past few years, many police departments, especially in the United States, have introduced police body-cameras. Advances in video technology have allowed the police to gain more insight into critical situations. Body-cam videos can be a tool for police accountability and for supporting officers who behave lawfully.

One of the pioneers in the field and the largest producer of police body-cameras, Axon, has announced a new AI system for analyzing body-cam videos. Beyond the body-cameras themselves and the data storage, Axon wants to broaden its scope with the new AI system. According to the company, the system will be able to interpret and describe the recorded events in written form and eventually help generate police reports from those descriptions.

Axon’s Body3 police body-camera.

However, police oversight is a critical application field, and trusting commercial AI solutions raises concerns. Daniel Greene, a professor at the University of Maryland's College of Information Studies, and Genevieve Patterson, chief scientist at a computer-vision startup, have addressed this problem in their article "Can We Trust Computers With Body-Cam Video?".

According to them, many of the AI capabilities that Axon proposes to deploy aren't mature enough. Moreover, they emphasize that this kind of software is proprietary, so there would be no way to tell whether the technology is free from bias.

A number of problems can arise when employing AI-based technology, mainly because of biased training data. Axon claims it will train its AI system on its existing database of body-camera footage, 30 petabytes of video collected by 200,000 officers. But since both the AI system and the database are proprietary, there is no guarantee that the system will be unbiased.

As mentioned several times, automated video interpretation might be too optimistic a goal, even for the technology we (or Axon) have now. According to the authors, full video interpretation and report generation are currently impossible. Even then, the same issues of fairness, accountability, and transparency would remain.

However, according to them, Axon's new AI system will have at least one useful feature: obscuring the faces of people in body-camera videos so that they cannot be identified.

In conclusion, the technology is there, but it may not be mature enough to deploy. The authors of the article express concern and call for a public debate on this topic and everything around it. Such a debate might be of crucial importance before taking any step in either direction.

Machine Learning Goes Quantum Level

22 November 2018

It was only a matter of time before Machine Learning and Quantum Computing would meet. Although machine learning has grown as a field at an incredible pace over the past decade, only now are we witnessing the first efforts to take machine learning to the quantum level.

The perceptron algorithm is the simplest type of artificial neural network. It is a model of a single neuron that can be used for two-class classification problems and provides the basis for the much larger networks developed later. Proposed by Rosenblatt back in 1958, early experiments with the perceptron model aroused unrealistic expectations. Many years later, however, it served as the foundation of a whole field of machine learning and artificial intelligence.

Schematic of Rosenblatt’s Perceptron Model
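Rosenblatt's model and its learning rule fit in a few lines of code: the neuron fires when the weighted sum of its inputs crosses a threshold, and each mistake nudges the weights toward the correct answer. A minimal sketch on a toy two-class problem (logical AND):

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=0.1):
    """Rosenblatt's learning rule: nudge weights on each mistake."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi  # no update when pred == target
            b += lr * (target - pred)
    return w, b

# Two-class toy problem: logical AND of two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

The perceptron converges here because AND is linearly separable; its famous limitation is that a single neuron cannot learn non-separable patterns such as XOR.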

Francesco Tacchino and his colleagues at the University of Pavia in Italy have built the world's first perceptron implemented on a quantum computer. In their paper, "An Artificial Neuron Implemented on an Actual Quantum Processor", they introduce a quantum information-based algorithm implementing a quantum-computer version of the perceptron. They use few-qubit models of the perceptron to test on a real quantum computer, and they show that this quantum perceptron can serve as an elementary nonlinear classifier of simple patterns; in their experiments they used 2×2 black-and-white images.

Tacchino's perceptron model takes a tensor (for example, an image) as input, combines it with a quantum weighting vector, and outputs a 0 or a 1. The procedure is fully general and could be implemented and executed on any platform capable of universal quantum computation.

Like Rosenblatt's perceptron, this "quantum perceptron" model has great potential and might pave the way for quantum artificial intelligence. It is possible that a new, different class of machine learning will emerge, and that together quantum computing and machine learning will bring us toward Artificial General Intelligence (AGI) and Artificial Super-Intelligence (ASI).

Microsoft’s Autonomous Systems Platform AirSim Now Available on Unity

21 November 2018

Microsoft announced that it has partnered with Unity to make AirSim, its open-source simulator for autonomous systems, available on Unity. From now on, AirSim developers get a powerful, integrated development platform to experiment with, develop, and evolve AI solutions for autonomous systems.

Back in February 2017, Microsoft Research released a new platform to help developers quickly build autonomous and robotic systems. The idea behind the platform was to provide realistic simulation tools enabling the study and execution of complex missions that would be time-consuming and risky in the real world. From the perspective of Artificial Intelligence and autonomous systems, it solves two major problems: the large amounts of data needed for training (given the requirements of deep learning methods), and the ability to debug in a simulator.

Unity, meanwhile, has become extremely popular over the years and is now recognized as one of the most influential game engines. Microsoft Research and Unity have partnered to provide support for AirSim on Unity.

AirSim, Unity and Microsoft’s Visual Studio now together provide a powerful and integrated development platform.

Previously, Microsoft's powerful autonomous systems simulator was available exclusively for Unreal Engine. Now, with AirSim available on the cross-platform game engine, Microsoft says it expects manufacturers and developers to be able to enhance their autonomous vehicle technology.

 

“Our goal with AirSim on Unity is to help manufacturers and researchers advance autonomous vehicle AI and deep learning. Unity gives its OEM clients the ability to develop realistic virtual environments in a cost-efficient manner and new ways to experiment in the world of autonomous and deep learning.”

Ashish Kapoor, Principal Researcher at Microsoft Research & AI.

 

A first, experimental version is available here. Unity support has been added to the main AirSim repository on GitHub in a separate folder (/Unity), which contains the AirSim wrapper code, car and drone demo projects, and documentation. The first Windows version is a beta, and a Linux version is said to be coming soon.

How Did OpenAI’s Bot Defeat the Team of Dota2 Semi-Pros?

21 August 2018

Games have become a widely adopted way of benchmarking the advances of artificial intelligence. Since Deep Blue won a chess match against Garry Kasparov in 1997, AI has made many remarkable advances. Machines have topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, and even Jeopardy!

A few years ago, we witnessed Google's AlphaGo beating the world champion at Go, a 2,500-year-old game that is exponentially more complex than chess and demands genuine strategy rather than brute-force search over all possible combinations of actions, showing the impressive advancement of artificial intelligence, especially in the past few years. Impressive as it was, it remained questionable whether this is really intelligence and whether this is the way to measure it.

Time for AI to become the best Dota2 team

It's 2018 now, and we are witnessing another set of human champions defeated by an AI agent, this time at Dota2. A few months ago, OpenAI, a non-profit, San Francisco-based AI research company backed by Elon Musk, Reid Hoffman, and Peter Thiel, announced an official Dota2 match between its Dota2-playing AI agent and a team of former Dota2 pros. Not chess, not Othello or Go, but one of the world's most popular online strategy games: Valve's Dota 2. It immediately caught the attention of both the AI community and the Dota gaming community.

This was not the first match for OpenAI's Dota player, though. Its first generations were constrained to 1-vs-1 matches (which are less complex), and it had been tested many times before going 5-vs-5. In June this year, OpenAI's Dota player beat five teams of amateur players in 5-vs-5 settings, including one made up of Valve employees. The challenge to beat a team of pros was issued right after this success.

To make the game manageable for the AI, a few low-impact constraints were introduced, such as a narrower hero pool to choose from and invincible item-delivery couriers. These are said not to significantly affect the measure of OpenAI Five's success.

On August 5th, OpenAI Five (as the AI player is called) defeated the team of former pros in a best-of-three series, far more easily than expected. It won the first game without losing a single tower to the opponent, which is truly remarkable. Moreover, it showed intelligence in its decisions and strategic decision-making as a result of good team play.

How did they do it?

OpenAI Five is actually a set of five single-layer, 1,024-unit long short-term memory (LSTM) recurrent neural networks, one assigned to each hero in the 5-vs-5 setting. The fact that it is not a multi-modular, super-complex system but just a set of relatively simple recurrent neural networks makes it even more impressive.

"People used to think that this kind of thing was impossible using today's deep learning", says Greg Brockman, one of the co-founders of OpenAI. The simple setup of five networks that do not even communicate with each other has defeated a team of human masters. Even though the heroes (in fact, the networks) don't communicate, there is still noticeable team chemistry. The teamwork is controlled by a hyperparameter that can be set from 0 to 1 and weights how much each hero cares about its own individual reward compared to the average reward of the whole team. In this way, OpenAI Five is able to come up with its own strategies.
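The reward-blending idea behind that hyperparameter can be sketched in a few lines. The function name and exact formula below are an illustrative guess at the described behavior, not OpenAI's actual code:

```python
def blended_reward(individual_rewards, hero_index, team_spirit):
    """Blend a hero's own reward with the team average.

    team_spirit = 0 -> purely selfish
    team_spirit = 1 -> cares only about the team average
    """
    own = individual_rewards[hero_index]
    team_avg = sum(individual_rewards) / len(individual_rewards)
    return (1 - team_spirit) * own + team_spirit * team_avg

rewards = [3.0, 1.0, 0.0, 2.0, 4.0]   # hypothetical per-hero rewards
print(blended_reward(rewards, 0, 0.0))  # 3.0 (selfish)
print(blended_reward(rewards, 0, 1.0))  # 2.0 (team average)
print(blended_reward(rewards, 0, 0.5))  # 2.5 (halfway)
```

Sliding the parameter toward 1 makes each network optimize for the group, which is how coordinated behavior can emerge without any explicit communication channel between the five networks.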

Elon Musk on Twitter about OpenAI’s win against human pros

All of the networks receive an input every four frames describing the state of the game. Each input is in fact about 20,000 mostly floating-point numbers encoding vital information such as the location and health of visible units, giving the agents access to the same knowledge a human team can have. On average, their reaction time is faster than a human's, giving OpenAI's agents a slight advantage.

The networks were trained with OpenAI's Proximal Policy Optimization and self-play. They "play" 180 years' worth of games every day, 80 percent against themselves and 20 percent against past versions of themselves. To make this possible, OpenAI used 256 Nvidia Tesla P100 graphics cards and 128,000 processor cores on Google's Cloud Platform.
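The 80/20 self-play mix reduces to a simple opponent-sampling rule at the start of each training game. A minimal sketch, with policy "parameters" simplified to plain version labels for illustration:

```python
import random

def pick_opponent(current_policy, past_policies, p_self=0.8, rng=random):
    """Sample an opponent: 80% the latest policy, 20% a past snapshot.

    Playing mostly against the current self drives improvement, while
    the 20% of games against old snapshots guard against forgetting
    how to beat earlier strategies.
    """
    if not past_policies or rng.random() < p_self:
        return current_policy
    return rng.choice(past_policies)

rng = random.Random(0)
history = ["v1", "v2", "v3"]          # hypothetical past snapshots
picks = [pick_opponent("v4", history, rng=rng) for _ in range(1000)]
share_current = picks.count("v4") / len(picks)
print(round(share_current, 2))  # roughly 0.8
```

Over many sampled games the mix converges to the stated 80/20 split.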

OpenAI announced that it is working on improving OpenAI Five further and that it will compete at "The International", the biggest international Dota2 tournament, where teams can win millions of dollars. OpenAI Five will have a chance to become a champion of the famous Dota2 and to show once again how far Artificial Intelligence has advanced.