With battles, wars, and conflicts raging in many parts of the world, what this world needs is peace.
And as the Nobel Prize winner and father of the Green Revolution, Dr. Norman Borlaug, famously said, “There cannot be any peace on a hungry stomach”. If people are well fed, and hence happy, they are less likely to engage in conflict.
A group of researchers from Cornell University is using ML techniques to analyse food and market conditions and predict poverty and malnutrition in the poorest regions of the planet.
The method uses available satellite data to measure solar-induced chlorophyll fluorescence (SIF), the photons emitted during photosynthesis, as a proxy for agricultural productivity. This data is then combined with land surface temperature and food price data.
The surface temperature data captures moisture content in near real time, providing signals about crop health and drought risk. Superimposing this data with food prices helps infer how much farmers are able to earn and how much food they can afford to buy as consumers.
From these data, the model generates maps showing factors like the estimated prevalence of poverty or the number of children and women at risk of malnutrition. The maps not only make it easy to identify specific regions needing help; they also show how conditions evolve over time, helping policy-makers and aid organizations make decisions.
Instead of reacting to a humanitarian crisis after the fact, we can now anticipate it. In case of drought or crop failure, the ML system can forecast the event and help ensure timely aid reaches those affected.
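To make the idea concrete, here is a minimal sketch of this kind of model on entirely synthetic data. The feature names, coefficients, and numbers below are illustrative assumptions, not the Cornell team's actual data or method:

```python
import numpy as np

# Hypothetical district-level features (illustrative, not the researchers'
# real inputs): SIF photosynthesis signal, land surface temperature anomaly,
# and a local staple food price index.
rng = np.random.default_rng(0)
n = 200
sif = rng.uniform(0.2, 1.0, n)          # higher SIF -> healthier crops
temp_anomaly = rng.normal(0.0, 1.5, n)  # hotter -> more drought stress
price_index = rng.uniform(80, 160, n)   # higher -> food less affordable

# Synthetic "ground truth": malnutrition prevalence rises with heat and
# prices, falls with crop productivity (plus survey noise).
prevalence = (0.4 - 0.25 * sif + 0.03 * temp_anomaly
              + 0.002 * price_index + rng.normal(0, 0.02, n))

# Fit a linear model by least squares: prevalence ~ [1, sif, temp, price].
X = np.column_stack([np.ones(n), sif, temp_anomaly, price_index])
coef, *_ = np.linalg.lstsq(X, prevalence, rcond=None)

# Predict for an unsurveyed district: low SIF, hot, expensive food.
new_district = np.array([1.0, 0.3, 2.0, 150.0])
pred = new_district @ coef
print(f"predicted malnutrition prevalence: {pred:.2f}")
```

A real system would use far richer models on satellite-derived features, but the principle is the same: learn the mapping from crop, climate, and price signals to observed outcomes, then predict where surveys are missing.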
The Sydney Harbour Bridge weighs 52,800 tonnes, and it is the first iconic structure we see lit up with fireworks on New Year's Eve.
The bridge is a symbol of Sydney, and indeed of Australia.
At 134 metres, she is the world's tallest steel arch bridge, her deck spanning 1,149 metres. And she is 86 years young! The residents of New South Wales affectionately call her ‘the coathanger’.
To maintain the ‘old matriarch of Sydney Harbour’, Roads and Maritime Services (RMS) is deploying a network of 2,400 sensors to measure vibrations in the metal. Machine learning algorithms are then applied to the sensor data, so that the maintenance crew is alerted even before cracks and faults appear.
Earlier, most of the maintenance work was done through visual inspection, which was not only arduous but also very risky. It also consumed a lot of time and money and was thus limited in its scope.
Each sensor unit consists of three low-cost accelerometers and a small Linux-based processor. The units are fixed to the arches, one per arch, with epoxy glue, and linked via a daisy-chained Ethernet network and a 1.2-kilometre fibre optic backbone.
The vibration caused by a vehicle passing over the bridge differs from that caused by a crack forming, so an algorithm can be trained to distinguish normal from abnormal behaviour. Temperature and weather effects are incorporated into the ML model as well, since they have a pronounced effect on the structure: on hot days, the bridge can grow taller by 10 cm.
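A minimal sketch of this normal-versus-abnormal idea, assuming synthetic accelerometer data and a simple 3-sigma energy rule rather than RMS's actual (unpublished) algorithm:

```python
import numpy as np

# Illustrative sketch: learn what "normal" vibration looks like from
# accelerometer readings, then flag windows whose energy deviates too far
# from that baseline.
rng = np.random.default_rng(1)

def window_rms(signal, size=100):
    """Root-mean-square energy of non-overlapping windows."""
    trimmed = signal[: len(signal) // size * size]
    return np.sqrt((trimmed.reshape(-1, size) ** 2).mean(axis=1))

# Training data: vibrations from ordinary traffic only.
normal = rng.normal(0.0, 1.0, 50_000)
baseline = window_rms(normal)
mean, std = baseline.mean(), baseline.std()

# New data: traffic plus a burst of high-energy vibration (a crack event).
traffic = rng.normal(0.0, 1.0, 10_000)
traffic[4_000:4_300] += rng.normal(0.0, 6.0, 300)

scores = window_rms(traffic)
anomalous = np.abs(scores - mean) > 3 * std  # 3-sigma rule
print("anomalous windows:", np.flatnonzero(anomalous))
```

The real system additionally folds in temperature and weather features, so that a hot-day expansion is not mistaken for damage.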
Currently, all 800 arches are connected to sensors and the network; the plan is to extend monitoring to all the other major parts of the bridge.
The system alerts the crew through both email and SMS. Manual inspection will continue in the future as well: the sensors and the ML model act not as a substitute, but as a complement.
People have stored more than 20 billion image and PDF files in Dropbox. Of those files, 10-20% are photos of documents—like receipts and whiteboard images—as opposed to documents themselves. These are now candidates for automatic image text recognition. Similarly, 25% of these PDFs are scans of documents that are also candidates for automatic text recognition.
From a computer vision perspective, although a document and an image of a document might appear very similar to a person, there’s a big difference in the way computers see these files: a document can be indexed for search, allowing users to find it by entering some words from the file; an image is opaque to search indexing systems, since it appears as only a collection of pixels. Image formats (like JPEG, PNG, or GIF) are generally not indexable because they have no text content, while text-based document formats (like TXT, DOCX, or HTML) are generally indexable. PDF files fall in-between because they can contain a mixture of text and image content. Automatic image text recognition is able to intelligently distinguish between all of these documents to categorize data contained within.
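In code, the first routing step might look like the toy categorizer below; the extensions and category names are illustrative, not Dropbox's actual rules:

```python
# Decide, per file, whether its text is directly indexable or whether it is
# a candidate for automatic image text recognition (OCR).
from pathlib import Path

TEXT_FORMATS = {".txt", ".docx", ".html"}          # content is already text
IMAGE_FORMATS = {".jpeg", ".jpg", ".png", ".gif"}  # pixels only
MIXED_FORMATS = {".pdf"}                           # text, images, or both

def categorize(filename: str) -> str:
    ext = Path(filename).suffix.lower()
    if ext in TEXT_FORMATS:
        return "indexable"
    if ext in IMAGE_FORMATS:
        return "ocr-candidate"
    if ext in MIXED_FORMATS:
        return "inspect-content"  # needs per-page analysis
    return "unknown"

for name in ["receipt.png", "notes.docx", "scan.pdf"]:
    print(name, "->", categorize(name))
```

PDFs land in the "inspect-content" bucket precisely because, as noted above, they fall in between: each page has to be examined to see whether it carries text or only scanned images.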
After the recent uproar about “fake news”, it is again all quiet in India. Well, only until another lynching or riot happens. WhatsApp is busy expanding the use of its service rather than strengthening it.
The issue is going to haunt us again, very soon.
There are two big components of “fake news”: misinformation and extreme bias. If we add the veracity of the source, we can to a great extent pinpoint whether a news article is fake.
Facebook and others are employing human moderators to detect and delete fake articles. But in this age, when articles are published by the thousands with the help of AI, only technology can scale to match.
The source is the key here; after all, in news, credibility is everything. The Washington Post's credibility is orders of magnitude better than that of a random news app.
A publication with a record of bias and unverified reporting is easy to flag, permanently.
MIT's Computer Science and Artificial Intelligence Lab and the Qatar Computing Research Institute are developing a new machine learning system designed to evaluate not only individual articles but entire news sources. The system classifies news sources by general accuracy and political bias.
As the researchers put it: “If a website has published fake news before, there’s a good chance they’ll do it again. By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”
The data was fed into the system from Media Bias/Fact Check, an independent, non-partisan resource that classifies news sources by political bias and accuracy. On top of this dataset, the system was trained to classify the bias and accuracy of a source based on five features: textual, syntactic, and semantic analysis of its articles; its Wikipedia page; its Twitter account; its URL structure; and its web traffic.
While the algorithm will also reflect the biases of its creators, it is definitely one of the most potent attempts yet to manage the menace.
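To illustrate the idea of source-level (rather than article-level) classification, here is a toy logistic-regression sketch on synthetic data. The features and weights are invented stand-ins for the five signal types above, not the MIT/QCRI system:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical features per source: [article sensationalism score,
# has_wikipedia_page, twitter_verified, url_looks_spammy, web_traffic_norm]
n = 300
X = rng.uniform(0, 1, (n, 5))
X[:, 1] = rng.integers(0, 2, n)  # binary: Wikipedia page exists
X[:, 2] = rng.integers(0, 2, n)  # binary: verified Twitter account

# Synthetic labels: sensational text and spammy URLs make a source more
# likely to be low-accuracy; a Wikipedia page makes it less likely.
logits = 3 * X[:, 0] + 3 * X[:, 3] - 2 * X[:, 1] - 1.0
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logits))).astype(float)

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y) / n)
    b -= 0.5 * (p - y).mean()

prob = lambda x: 1 / (1 + np.exp(-(x @ w + b)))
suspect = np.array([0.9, 0.0, 0.0, 0.9, 0.5])    # sensational, spammy, no wiki
reputable = np.array([0.1, 1.0, 1.0, 0.1, 0.1])
print(f"suspect: {prob(suspect):.2f}, reputable: {prob(reputable):.2f}")
```

The point of modelling the source rather than each article is exactly the quote above: a source's track record generalizes to articles it has not yet published.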
Goldman Sachs used machine learning to run 200,000 models, mining data on team and individual player attributes, to help forecast specific match scores.
They then simulated 1 million possible variations of the tournament in order to calculate the probability of advancement for each squad.
And the winner?
- Brazil is expected to win its sixth World Cup title, defeating Germany in the final by an unrounded score of 1.70 to 1.41
- While France has better overall odds of lifting the trophy, its expected meeting with Brazil in the semi-finals has it falling short of the title match
- England is expected to make it to the quarter-final stage, where it will lose to Germany
- Spain and Argentina are forecast to underperform, both losing in the quarter-finals
- Russia isn’t expected to make it out of the group stage at all, despite its role as tournament host
- Goldman sees Saudi Arabia as the surprising team that will advance out of the group stage, ahead of the host nation
[Download the report]
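Goldman's 200,000 models and actual team ratings are not public, but the Monte Carlo idea itself is simple to sketch: rate each team, simulate match scores as random goal counts, and repeat the bracket many times. The strengths and the four-team bracket below are illustrative assumptions:

```python
import random

# Each team gets one "expected goals per match" strength (invented numbers).
TEAMS = {"Brazil": 2.0, "Germany": 1.8, "France": 1.9, "England": 1.5}

def play(a, b, rng):
    """One knockout match: goals as per-minute chances; coin-flip if level."""
    ga = sum(rng.random() < TEAMS[a] / 90 for _ in range(90))
    gb = sum(rng.random() < TEAMS[b] / 90 for _ in range(90))
    if ga == gb:
        return a if rng.random() < 0.5 else b
    return a if ga > gb else b

def tournament(rng):
    """Semi-finals, then the final; returns the champion."""
    finalist_a = play("Brazil", "France", rng)
    finalist_b = play("Germany", "England", rng)
    return play(finalist_a, finalist_b, rng)

rng = random.Random(42)
trials = 20_000
wins = {t: 0 for t in TEAMS}
for _ in range(trials):
    wins[tournament(rng)] += 1

for team, w in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {w / trials:.1%}")
```

Running many simulated brackets is what turns per-match score forecasts into per-team advancement probabilities like the ones above.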
Machine learning wins.
AlphaZero won the closed-door, 100-game match against Stockfish with 28 wins, 72 draws, and zero losses. Remarkably, it evaluates “only” 80,000 positions per second, compared to Stockfish's 70 million, and it was not “taught” the game in the traditional sense: it learned entirely through self-play.
The science behind personalized music recommendations!
Spotify doesn't actually use a single revolutionary recommendation model; instead, it mixes together some of the best strategies used by other services to create its own uniquely powerful discovery engine.
To create Discover Weekly, there are three main types of recommendation models that Spotify employs:
- Collaborative Filtering models (i.e. the ones that Last.fm originally used), which work by analyzing your behavior and others’ behavior.
- Natural Language Processing (NLP) models, which work by analyzing text.
- Audio models, which work by analyzing the raw audio tracks themselves.
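The first of these, collaborative filtering, is easy to sketch: if the same users play two tracks, the tracks are probably similar. Below is a toy item-based version on an invented play-count matrix (illustrative only, not Spotify's implementation):

```python
import numpy as np

# User-by-track play counts (rows: users, columns: tracks). Invented data.
plays = np.array([
    # trackA trackB trackC trackD
    [5, 3, 0, 1],   # user 0
    [4, 0, 0, 1],   # user 1
    [1, 1, 0, 5],   # user 2
    [0, 0, 5, 4],   # user 3
    [0, 1, 5, 4],   # user 4
], dtype=float)
tracks = ["trackA", "trackB", "trackC", "trackD"]

# Cosine similarity between track columns: tracks co-played by the same
# users end up similar.
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)

def recommend(user, k=1):
    scores = plays[user] @ sim          # weight similarities by play counts
    scores[plays[user] > 0] = -np.inf   # don't re-recommend known tracks
    return [tracks[i] for i in np.argsort(scores)[::-1][:k]]

print("user 1 should try:", recommend(1))
```

The NLP and audio models then cover the cases collaborative filtering cannot: brand-new tracks that nobody has played yet.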
[This is a #curated article.]
Humans have another competitor in life these days: machines.
The recent trend of machine learning in the Information Technology (IT) sector, as reported widely in the media, may be a cause for alarm, concern, and uncertainty. Yet, could there be a solid lesson to be learnt by both professionals and students alike? Is there something fundamentally wrong that we are assuming about the knowledge and skills we acquire?
The answer to both questions is a resounding “yes”. That's why many visionaries propose that we now live in an era of “Digital Darwinism”. The theory of evolution posits that Homo sapiens are the result of several unlearnt traits (the tail is one such), but we evolved biologically over millions of years. Over time, humans also relearnt a utilitarian life without tails. In our digitizing world, however, our brains and our practices are evolving in highly compressed time. Disruption is the norm in today's professions. In the information age we live in, processes, products, and services need the kind of innovative practices that demand that each professional not merely apply prior learning, but use previous education to develop new disruptions. This means that a professional must be a perennial student, often self-trained to research, think, and innovate. More demanding still is the challenge that the skills we have learnt must constantly be unlearnt.
While not all unlearning needs to be so paradigmatic in nature, unlearning's fundamental principle is that our existing skills need re-examination in a new environment. For example, what is the biggest challenge confronting our IT professionals today? It is the perceived threat of automation. Yet even if robots are a threat to human endeavour, the opportunities in judgment-based, human-interaction-based, and creative work can never be replaced by them.
Machines understand that we live in a many-to-many, multidirectional transactional world. Many of us humans, though, still operate on a more conventional ‘operating system’ in our minds.
Gartner's Digital Trend Spotter in 2017 ranks the most popular trends in learning, applying, and developing, including machine learning, intelligent apps, intelligent things (such as robots, drones, and autonomous vehicles), virtual/augmented reality, and digital twins (models of physical things). The industries that will become most tech-enabled are logistics, healthcare, electricity, automotive, and consumer goods. By as early as 2019, 40% of IT projects will create new digital services and revenue streams.
Yet it is human interaction that will prove to be the big differentiator in tomorrow's professional universe, with automation handling the backend work. Take, for example, the vertical of social media marketing. I consider this a quintessential combination of automation and human endeavour: the automated use of big data and its bot-enabled mining triggers the very human skill of storytelling. Storytelling has not been at the top of any school's curricular agenda so far. Yet it reflects the kind of re-skilling that the market needs: it is in high demand, with very few takers. We are largely yet to revisit our dormant storytelling skills. Traditional storytellers such as journalists, on the other hand, have been forced to shed their conventional skills and adopt new styles of storytelling.
Of course, as with all other technology, institutional evolution will precede individual evolution. Investing in new technology and innovating products and services now sit at the top of business agendas, and play a big role in coping with the transformation. A Cisco prediction says 40% of today's companies will not exist 10 years from now. According to a KPMG CEO Outlook survey, more than 50% of Indian CEOs said their organization will be “completely transformed” in the next three years. This is not a surprising report: according to the World Economic Forum, 35% of today's skills will have changed a mere five years from now, given the blurring of human and robotic experiences that increasingly engulfs us. Yet only 27% of the world's businesses have a coherent digital strategy that creates value for the customer.
Given the enormity of the challenge, companies are overwhelmed by the immediate need to retrain and reskill more than 4 million employees in the digital skills of the future. The sustainable solution is to train hundreds of thousands of undergraduate students in digital skills, so that they leave campus already equipped with them. This is a massive problem that deserves immediate attention and intervention.
Unlearning is not the same as not learning. Learning a skill is a necessary step before unlearning it, akin to the evolution principle. The trouble with learning is unlearning how we think about learning, says Mark Bonchek in his Harvard Business Review article, “Why the Problem with Learning Is Unlearning”. This is because, as the fancy ‘learning corporations’ have swiftly realized, unlearning must happen at the deeper level of the fundamental principles on which learning is founded. For example, India has been unlearning its earlier socialistic principles since the nation's economy was liberalized. Another example is the need to re-skill ourselves to adjust to the newly digitized paradigm around us.
The trick to unlearning and relearning is that it must be a constant process: we must acknowledge that our current skills are merely transient and work towards re-engineering them. While Bill Gates famously predicted most of the current digital and robotic trends back in 1999, he, like most disruptors, would caution us: if technological inventions are to survive, humans must evolve to meet and adopt them.
The Frankensteinian alarm, raised in popular sci-fi movies and in our real world, that machines may take over our world and our souls, may still be a fantasy. Yet machine learning is one of the most important and employable professional skills today.
The irony, if lost on us, can cost us our relevance in the strife of a world that is constantly morphing.
[Written by Rajan Venkataraman, Chief Digital Officer, NIIT Ltd]
NextBigWhat invites insights from the trenches and if you have a thing or two to share, please go ahead and share.