“There are three types of people in this world: those who make things happen, those who watch things happen, and those who wonder what happened.”
With AI ruling the world, if you don't belong to the first group of people (i.e. the creators), you are in for a tough time ahead.
It doesn't matter whether you are a CXO, a tech entrepreneur, a product manager, or a software engineer – AI is going to change the world around us. And if you wanna be the one who makes things happen and be in the know of what's really happening in the AI space, read many different perspectives on AI – from scientists, creators, researchers and more.
UN Secretary-General António Guterres has called lethal autonomous weapons systems "morally repugnant", yet the talks being held at the UN on a pre-emptive ban on killer robots have been thwarted by a group of countries including the UK, US, Australia, Israel and Russia.
Almost all of these countries have already invested substantially in creating such systems (lethal drones were first off the block) in order to win the next arms race, one fueled by AI.
“We urgently need a ban on killer robots.
The majority of states get it.
A rapidly growing proportion of the tech community get it.
Civil society gets it.
But a handful of countries including the UK are blocking progress at the UN.”
Amazon has received a lot of flak in recent times over its deployment of facial recognition and other technologies with law enforcement agencies – all the more so because these systems have been found wanting on many ethical and humane fronts, and because the company has repeatedly refused to submit its systems for independent inspection and authorisation.
The company announced recently that it is working with the US National Science Foundation to give a total of $10 million in research grants over the next three years to help improve fairness in artificial intelligence.
"We believe we must work closely with academic researchers to develop innovative solutions that address issues of fairness, transparency and accountability and to ensure that biases in data don't get embedded in the systems we create.
Funded projects will help to enable broadened acceptance of AI systems, helping the US further capitalize on the potential of AI technologies."
Canada has issued a directive titled "Directive on Automated Decision-Making".
The objective of this Directive is to ensure that Automated Decision Systems are deployed in a manner that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.
It applies to any system, tool, or statistical model used to recommend or make an administrative decision about a client (citizens, businesses, non-Canadians, or organizations, e.g., non-profit or internal to government).
To implement the AI principles it announced in June 2018, Google has announced the formation of a global council, the Advanced Technology External Advisory Council (ATEAC).
This group will consider some of Google's most complex challenges that arise under its AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform the company's work.
The eight-member council draws on diverse fields such as diplomacy, philosophy, public policy, psychology, behavioural economics and computation.
With AI set to change humanity forever (it already has, quite a lot in fact), front runners and followers alike have a responsibility to ensure that they are not creating another 'Frankenstein'.
Because there will be no turning back once the shit hits the fan.
We have a moral code of conduct in our daily lives, ingrained since childhood; it is imperative that our creations, be they algorithms or machines, abide by it too.
Because in our machines, we 'humans' get manifested after all.
The creator's responsibility lies in ensuring that both the algorithms and the resulting machines are fair, accurate and intelligent enough to avoid any tangible or intangible harm (bias, inequality).
And creators should be open about how their creations work and how they will influence the lives of the people using them.
“When algorithms affect human rights, public values or public decision-making, we need oversight and transparency”
“When people fly in a plane, they do not need to know exactly how the plane works to feel safe flying. They just need to know that the plane abides by certain aviation safety regulations”.
What is reality? What is truth?
These two questions have confounded the best minds of humanity for centuries, leading to quests that often lasted a lifetime.
But in our lives, we do not have to worry about such questions.
That is because there is no 'reality' left at all.
We all are 'augmented' now, as we see the world through 5 inches (more or less) of glass.
It is not difficult to see that AI and related technologies are keeping us in our own silos and isolating us from reality – from the pictures we see to the news we read.
We are getting lost in 'connections' at the cost of forgoing the real ones.
Are we really moving ahead with technology, or are we regressing?
"In summary, if our emerging technology is going to merge humanity with the machine, augmenting our human intelligence with artificial intelligence, we need to manage the risk! The risk of isolation, the risk of losing our art of conversation, the risk of convincing ourselves that our pre-conceived biases are not only factually correct, but good."
The algorithms we see in the consumer world today are all built 'for profit'.
Their primary aim is to maximise profits for the corporations, and that is why they have no 'societal' concerns built into them.
Corporations benefiting at the peril of society is the new endgame, and we are all party to it!
What should we do?
Do we build technologies that are autonomous, like drones and robots?
Where do we draw the line? Do we start with the end in mind, with AI technologies?
How will it end?
AI does not understand human nuances; it is up to us to introspect and build systems that respect human dignity and fundamental rights.
We have a big responsibility to educate the general public on what is really going on in the AI world.
And this is not just about the cool stuff, but the real stuff: how it is going to impact each one of us (in a 360-degree way), and how we take it forward, morally.