There is an ongoing, polarising debate among experts in both academic and industrial circles about artificial intelligence (AI) and the impact of its adoption on our lives. One side envisions multiple benefits from its widespread adoption, while the other strongly recommends extreme caution and safeguards to prevent AI from taking control of every aspect of our lives.
The concept of AI was first introduced at the Dartmouth conference held in 1956, with the intention of proving the conjecture that every “…feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
Since then, the field has gone through multiple cycles of AI summers and winters, evolving through several incarnations before reaching its current state.
Over the past few decades, a subset of AI, machine learning (ML), has gained mainstream acceptance. ML uses statistical techniques to give computers the ability to “learn” from data without being explicitly programmed.
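To make the idea of “learning from data” concrete, here is a minimal sketch in plain Python: fitting a straight line to a handful of points by ordinary least squares. The program is never told the relationship between x and y; it estimates it from the examples. The data below is purely illustrative.

```python
# A minimal sketch of "learning" from data: the relationship
# y ≈ slope * x + intercept is estimated from examples, not
# hand-coded as an explicit rule.

def fit_line(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Illustrative points generated from y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

Real ML systems use far richer models, but the principle is the same: parameters are inferred from data rather than programmed by hand.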
Another area currently gaining traction is deep learning (DL). It has been successfully applied to cognitive applications such as machine vision, natural language processing and generation, audio recognition, and medical image analysis, among many others.
The recent surge of interest in ML and DL has primarily been driven by the availability of commodity hardware that has reduced the cost of storage and processing, the voluminous data generated by a multitude of devices and sensors, and faster processing through distributed computing frameworks and graphics processing units.
AI is a disruptive technology, making its presence felt in every aspect of our lives, both personal and professional. It is an essential component in wearable devices assisting in elderly care, in video analysis to effectively identify and flag risks or defects, and in the analysis of medical tests such as MRIs and CT scans to suggest the best treatment options for patients, among other things.
In the industrial segment, we are observing organisations leveraging AI in multiple areas such as production optimisation, worker safety, quality improvement, and the reduction of energy consumption and wastage, to name a few.
AI is therefore enabling and enriching our interactions with multiple environments and giving us better control over our decision making.
Despite the benefits that AI can bring, there is a significant amount of mistrust among the general public when it comes to its applications. From fears over job security due to the prevalence of AI to doomsday predictions warning us against the rise of Skynet, there is no dearth of negative stories about AI.
This is not helped by news about deepfakes, China’s social credit system, smart devices sharing confidential conversations with private vendors, and Microsoft’s chatbot Tay posting inflammatory messages.
The concerns around AI can be broadly grouped as follows:
- Bias in AI/ML systems: When the data provided to an AI system is not balanced and representative, bias tends to get introduced into the system, negatively influencing its final decision making. There is a fair amount of evidence of AI systems that have unintentionally made unfair decisions based on gender, race, religion and other socio-economic factors in areas such as job applications, loan approvals, patient treatment recommendations, and security systems.
- Black box systems: AI-based applications are massively complex, hard to interpret and often opaque in their implementation. Consequently, they do not provide any view of their underlying decision-making process. It is also worth noting that most decisions made by these systems are probabilistic in nature and not absolute, which end-users are not made aware of in most cases. This makes it difficult to blindly trust the results provided by these systems.
- Concerns over privacy and data collection: Currently, there isn’t enough transparency around how organisations are collecting, storing, and using data that is captured in their AI systems. Incidents in the past few years with hackers gaining access to user data have further increased the negative perception around these applications.
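The bias concern above can be made concrete with a simple check. The sketch below compares a hypothetical model’s approval rates across two groups and computes their ratio, one common screening metric for disparate impact. The decisions and the 0.8 threshold (the so-called “four-fifths rule”) are illustrative assumptions, not drawn from any real system.

```python
# A minimal sketch of a fairness check: comparing positive-decision
# rates across two groups (demographic parity). All data here is
# hypothetical, for illustration only.

def approval_rate(decisions):
    """Fraction of positive (1 = approve) decisions in a group."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of the lower approval rate to the higher one (0..1)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical loan decisions (1 = approved, 0 = rejected)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.5 — well below the 0.8 rule of thumb
```

A single ratio is, of course, a crude screen; real audits examine many metrics and the data-collection process itself, but even a check this simple can surface the kind of skew described above.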
The stories mentioned above underscore the need for a governance mechanism to monitor the rapid evolution of AI. More importantly, they indicate a need to educate the general populace about the benefits and risks of adopting AI.
Some ways of ensuring this, along with steps already taken by various organisations, are listed below:
- Adopting strict consumer data privacy regulations, which give consumers more control over what data organisations can access and regulate how it is used.
- Organisations like Microsoft, IBM and Google are building AI systems that focus on being fair and unbiased, socially beneficial, explainable, transparent and accountable, to address concerns around AI ethics.
- Research organisations like Alphabet’s DeepMind and the non-profit research firm OpenAI are investing in ideas that push the boundaries of AI while charting a path towards safe AI.
- Actively seeking feedback from the general public to manage their expectations and to create more trust in the systems by providing a great end-user experience.
- Training the end-users and decision-makers who rely on AI applications, to help them understand how their recommendations and queries are resolved and the potential implications of these decisions.
AI is a disruptive technology that is making its presence felt in every aspect of our lives. Most organisations understand that adopting AI will give them a competitive edge, but very few currently have a sound AI strategy in place. The true AI advantage lies in enabling business owners and individuals with appropriate insights powered by sound, data-driven analysis.
We welcome your comments at firstname.lastname@example.org.