Amazon makes its machine learning courses freely available for curious minds.

Amazon has announced that the machine learning courses it uses to train its own engineers are now freely available to all.

According to a statement released by Matt Wood, an eight-year veteran of Amazon and general manager of deep learning and AI at the company, there are more than 45 hours of material across 30 different courses that developers, data scientists, data platform engineers and business professionals can take gratis.

Each starts with the fundamentals and builds on them through real-world examples and labs, letting developers explore machine learning through some of the fun problems Amazon has had to solve. These include predicting gift-wrapping eligibility, optimizing delivery routes, and predicting entertainment award nominations using data from IMDb (an Amazon subsidiary). The coursework helps consolidate best practices and demonstrates how to get started on a range of AWS machine learning services, including Amazon SageMaker, AWS DeepLens, Amazon Rekognition, Amazon Lex, Amazon Polly, and Amazon Comprehend.


The digital courses are now available at no charge at aws.training/machinelearning; you pay only for the services you use in labs and exams during your training.

Google admits the shortcomings of Machine Learning: Still can’t figure out a cute cat

Two Google units, Google Brain and DeepMind, have jointly published a paper. It details the shortcomings of machine learning, then offers a few techniques that could help the technology graduate toward “Artificial General Intelligence”, something more akin to everyday human reasoning.
“The research acknowledges that current ‘deep learning’ approaches to AI have failed to achieve the ability to even approach human cognitive skills.”
The research has not, however, dumped the entire theory of neural networks, the basis on which all machine learning works.
“The paper, ‘Relational inductive biases, deep learning, and graph networks,’ is authored by Peter W. Battaglia of Google’s DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh.”
The paper says that, at most, the advances made in machine learning have managed to generalise human experience: a kind of brute-force method, employing cheap data and computing resources.
Take identifying a cat among hundreds of millions of images. It is an instance of a general human experience, where a human distinguishes a cat from any other animal. But beyond such generalisation, say deciding which cat a human finds cute, is where no machine learning system can reach. One reason could be that no verifiable, authentic data exists for the task: what is ‘cute’ for one human is not for another. And that is where all machine learning falls flat. It also explains why much of the hoopla about ML and AI is blown out of proportion.
The answer the paper proposes is to use ‘graph networks’. They represent human cognition better, by mapping out entities and objects along with the relationships between them.
“Human cognition makes the strong assumption that the world consists of objects and relations. And because GNs [graph networks] make a similar assumption, their behaviour tends to be more interpretable.” [via]
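The entity-and-relation idea can be sketched in a few lines of Python. This is a hypothetical toy illustration, not code from the DeepMind paper; the node features, relation names and the simple averaging rule are all assumptions made for the example.

```python
# Toy graph of entities (nodes) and relations (edges), with one round of
# message passing, the core mechanism behind graph networks.
# All names and feature values are illustrative.
nodes = {"cat": 1.0, "mat": 0.5, "ball": 0.2}            # per-entity feature
edges = [("cat", "mat", "sits_on"), ("cat", "ball", "plays_with")]

def message_pass(nodes, edges):
    """Update each entity by averaging features from its related neighbours."""
    inbox = {n: [] for n in nodes}
    for src, dst, _relation in edges:
        inbox[dst].append(nodes[src])   # message flows along the relation
        inbox[src].append(nodes[dst])   # and back, for an undirected pass
    return {
        n: nodes[n] + sum(msgs) / len(msgs) if msgs else nodes[n]
        for n, msgs in inbox.items()
    }

updated = message_pass(nodes, edges)    # "cat" now reflects "mat" and "ball"
```

Because the relationships are explicit in the edge list, one can read off why an entity’s representation changed, which is exactly the interpretability the quote above is pointing at.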

Machine Learning: Go out-of-the-box. But not IN a Black Box

AI researchers are still stumbling in the dark; ML remains a black box.
The interpretability problem keeps us from seeing how a given AI came to its conclusions.
Ali Rahimi, an AI researcher in California, finds company in François Chollet, a computer scientist at Google in Mountain View: both worry about AI’s reproducibility problem, wherein, thanks to inconsistencies, AI innovators still falter in learning from each other and in breaking down what is going on under the hood.
For instance, think of ‘stochastic gradient descent’: after thousands of academic papers and numerous ways of applying it, we still tiptoe through trial and error. Hell yeah, it is sexy to embrace deep learning and all the adjacent stuff, but watch for wasted effort and suboptimal performance. Try algorithm testing across various scenarios, or ablation studies, or maybe computer scientist Ben Recht’s idea of shrinking things down to a ‘toy problem’.
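Recht’s ‘toy problem’ advice can be made concrete. Below is a minimal sketch, assuming nothing beyond the Python standard library: plain stochastic gradient descent fitting a straight line, a problem shrunk far enough that one can actually watch what the optimiser does. The function name, learning rate and data are invented for illustration.

```python
import random

def sgd_fit(xs, ys, lr=0.05, epochs=300, seed=0):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)               # visit samples in random order
        for x, y in data:
            err = (w * x + b) - y       # error on a single sample
            w -= lr * err * x           # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err               # gradient w.r.t. b
    return w, b

# Toy problem: recover y = 2x + 1 from 21 noiseless samples.
xs = [i / 10 for i in range(-10, 11)]
ys = [2 * x + 1 for x in xs]
w, b = sgd_fit(xs, ys)
```

On a problem this small, the effect of the learning rate, epoch count, or shuffling can be isolated and studied, which is precisely what is hard to do on a full-sized deep network.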
Ask yourself if you are petting a Schrödinger’s cat. It may stink.

Sex-crime, trafficking and Machine Learning: Avengers at last


Can you trace a stolen soap to a sex-trade victim? What if, with AI, we could connect dots lurking in some dark streets that come alive in shadows? Like retail theft, ads, payment mechanisms and language?
This miracle is already happening. If you ask Prof. Eric Schles from New York University and researchers from some American universities, that is.
Some machine-learning algorithms, shrink-wrapped in free software suites, could be just what the good guys need for tracking and catching the bad guys here: work with patterns in sex ads, pinpoint cryptocurrency wallets and smoke out the ringleaders of illegal prostitution operating online.
This echoes what researchers Renata A. Konrad and others from Worcester Polytechnic Institute describe in a paper on how quantitative approaches can be used to crack trafficking networks: tapping patterns in data, in advertisements that traffickers post on social media, and in other behavioural insights.
ML becomes the superhero here by wielding matrix completion to clean up falsified information and fill in missing data. Network-analysis tools (and naive Bayes classifiers) can also help take the fizz out of underground networks (and automate the removal of online prostitution postings).
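To make the classifier idea concrete, here is a minimal naive Bayes text classifier in plain Python: a sketch of the general technique the researchers mention, not their actual system, and the tiny training set and labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns log-priors and word log-likelihoods."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    total = sum(class_counts.values())
    priors = {c: math.log(n / total) for c, n in class_counts.items()}
    loglik = {}
    for c in class_counts:
        denom = sum(word_counts[c].values()) + len(vocab)  # Laplace smoothing
        loglik[c] = {w: math.log((word_counts[c][w] + 1) / denom) for w in vocab}
        loglik[c]["__unseen__"] = math.log(1 / denom)
    return priors, loglik

def classify(tokens, priors, loglik):
    """Pick the label with the highest posterior log-probability."""
    def score(c):
        return priors[c] + sum(
            loglik[c].get(w, loglik[c]["__unseen__"]) for w in tokens
        )
    return max(priors, key=score)

# Invented training data: flag postings by their wording.
docs = [
    ("win cash now".split(), "flag"),
    ("cash prize win".split(), "flag"),
    ("meeting at noon".split(), "ok"),
    ("lunch meeting today".split(), "ok"),
]
priors, loglik = train_nb(docs)
```

A real deployment would need far more data and careful labelling, but the mechanics, counting word frequencies per class and scoring new postings against them, are exactly this simple.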
Kudos. Now let’s take it further to crackdowns on child pornography, please. And let’s use ML algorithms to dive beyond the online pimps.
Wait, did we really have to rely on DeMo alone to fight Black Money?

The one thing we thought Machines would never have

Ears.
Or at least the knack and art of using them. But stop assuming that a machine, no matter how much of a smarty-pants it is, would never be able to suss out a musical genre. Machines can now get why Lata ji or Rihanna gets heads swooning, because some MIT researchers have brought out a model able to replicate human performance on auditory tasks too.
Yes, machines can now shine on sensory tasks as well, the way neuroscience Prof. Josh McDermott from the Department of Brain and Cognitive Sciences at MIT has spelt it out; and this is thanks to better theoretical foundations that were not available before.
Deep neural networks are inching closer to modeling the human brain. Behold those Grammys, darling (and of course, there is Google Duplex).