September 2, 2020 7:24 am
Disclaimer 1 – I am an engineer by profession and I am trained to judge by measurements. I am taught to be process-oriented and to be focused on repeatability. And, being an engineer, I take pride in solving everyday problems.
Disclaimer 2 – Whilst the hardcore AI evangelists will raise their eyebrows when I say AI (as most of what the world is hailing as miracle systems are predominantly deep learning and machine learning systems), I will still go with that terminology in the interest of the ‘Oh, I get it’ mortals (like me).
With these disclaimers, let me start. I can’t hold myself back amidst this discussion involving the adoption of AI and the new prefixes that are unfolding for AI (by the hour), many of which are being perceived, marketed and projected as new paradigms. You don’t believe me, or don’t get it (on the prefixes)? Look at the recent trending AI prefixes – ‘Repeatable AI’, ‘Explainable AI’, ‘Un-biased AI’, ‘Fair AI’, ‘Trustworthy AI’, ‘Ethical AI’ and so on. The debate on AI and DL was already a handful; these new dimensions (oh yeah, I know them – if you want to read on, google dimensions in DL :)) are adding more hidden layers in the minds of potential adopters. And now there are more things to confuse, more ways to scuttle project implementations, and many more open threads being created in the minds of early adopters.
Wait a second, am I saying these are not important questions that need to be answered? The judgement you have reached by now (by reading the first para, on which camp I am in) is a classic example of how years of learning have made you predict a plausible outcome that I am only hinting at, though I have not explicitly spelt it out. And what helped you do that is years of supervised learning.
The systems we are talking about are taught/fed information in a supervised manner. Their learnings, inferences and predictions are, no doubt, a function of what they have seen. The challenge is when we expose such a ‘heavily supervised’ system to do ‘un-supervised’ jobs (hence the heading of this article; read un-supervised as the final decision/outcome/action being executed without any manual intervention). By common sense, in the real world we would not do this with a freshly trained rookie until the rookie has been heavily tested and exposed to real-world systems. Rather, there are clear milestones at which the transition from active to passive to nil supervision takes place. Working with machines should not be much different; they too should be given logical milestones to improve against.
More importantly, there are plenty of business cases which do not warrant an un-supervised job at all. For example, I wouldn’t dare to call an AI (OK, DL) system that can detect cancer an un-supervised activity, since such a prediction would be overseen by a doctor before any treatment is administered. The prediction/detection merely eases the doctor’s job, and can also help consolidate the learnings of many experts into one single system (think of it as a super-observant technician). The users of such a system are not expected to suspend their common sense and say ‘But the system said so’.
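That active-to-passive-to-nil supervision transition can be sketched in code. Below is a minimal, hedged sketch – the function name, labels and threshold are my own hypothetical choices, not from any particular system: a model prediction is acted on automatically only when its confidence clears a bar that was set during the "rookie testing" phase; everything below the bar stays under human review.

```python
# Hypothetical sketch of supervision milestones for a deployed model.
# The threshold value is assumed; in practice it would be tuned during
# the heavy-testing phase before any autonomy is granted.

REVIEW_THRESHOLD = 0.95

def route_prediction(label: str, confidence: float) -> str:
    """Decide whether a model prediction is auto-accepted or sent to a human.

    Above the threshold: nil supervision, the prediction is acted on.
    Below the threshold: active supervision, an expert makes the call.
    """
    if confidence >= REVIEW_THRESHOLD:
        return f"auto:{label}"
    return f"human-review:{label}"

# Example: a low-confidence cancer detection goes to the doctor.
print(route_prediction("malignant", 0.80))  # human-review:malignant
print(route_prediction("benign", 0.99))     # auto:benign
```

The point of the sketch is only that the hand-off is an explicit, measurable rule, not a leap of faith.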
In the next few articles in this series I would like to share my experience as an engineer on working with AI, implementing systems with AI and reaping benefits from such implementations. Whilst the quest for the ‘AI Holy Grail’ is on, we as an engineering community should also be concerned about aligning various roles to reap success even in its current form (for example, how does a traditional BA differ from a BA working on an AI system, and how can they address many of the concerns raised above? How should a traditional QA function versus an AI-system QA? How can a functional test case even be written for an AI system?). One of the mechanisms whose workings we still have little clue about is the human brain, yet we trust it and function with it every day. AI systems, by contrast, are loaded with measurements that can be harnessed and can help calm the nerves to a great extent.
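To make "measurements" concrete: even the simplest supervised system yields numbers such as precision and recall that can be tracked against milestones. A minimal illustrative sketch, where the counts are made-up numbers rather than results from any real system:

```python
# Standard metrics from confusion-matrix counts: true positives (tp),
# false positives (fp) and false negatives (fn). The numbers below are
# purely illustrative.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

p, r = precision_recall(tp=90, fp=10, fn=30)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.90, recall=0.75
```

Unlike the brain, the system hands you these numbers for free; the engineering discipline is in deciding what thresholds they must meet before supervision is relaxed.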
As I said at the start, I am an engineer by profession and I am taught to work with measurements, and in these AI measurements I trust.