As conversations get
artificially real, Prateek Chatterjee,
Sr. Vice President, Corporate Communications & Marketing, NIIT Limited,
shares his insights on what this would mean for human connection and the future
So, how would you know that the reply to your Gmail is a customized response suggested by a machine? Would you trust it as much if you knew? I put this question to Raghu Ravintula, founder of Yellow Messenger, who was delivering a deep-dive session on 'Tech You Can Talk To: The New Frontier in NLP' last month at the 28th edition of the NASSCOM Technology and Leadership Forum, #NTLF2020.
As machines get smarter, they are moving beyond operational chatbots that reply to specific, predictable customer queries towards ones that suggest personalized answers within individual conversation threads, as in Gmail, where each answer is unique. Currently, Gmail suggests only canned reply options like 'thank you', 'how are you', and 'good to hear from you'.
Now imagine a not-so-distant future where machines can draft unique, individual replies rather than just suggest canned answers to choose from. Would you take them as seriously? The speaker confirmed that, yes, machines are headed that way. These complex NLP (Natural Language Processing)-led responses will be possible soon, as machines are learning fast. So where does that leave trust? That grain of thought led to a whole range of interesting possibilities that contextual conversations with AI and bots could open up. I've noted down a few points for us to chew on.
Relevant to the times
So what does AI (Artificial Intelligence) mean for communication anyway? If virtual assistants can read my emails and remind me to pay my bills on time, shouldn't they prompt me to carry an umbrella if I'll be landing in London on an unexpected day of showers in March? Or tell me that I'll need yellow fever vaccines when I travel to Kenya for the Masai Mara in August this year? Promptly then, shouldn't they also go ahead and tell me the authorized centres where I can get those shots? Agreed, all this information may be a click away on Google; but wouldn't I like it if someone volunteered it, like a caring friend, without my having to look it up?
When the complexity of NLP gets streamlined - and given the pace of technology and machine learning, we know that day isn't far - AI will achieve true human-machine interaction, wherein machines can be talked to, taught, and trusted the same way humans are. Having said that, think about what this could do to limit human interaction.
Trust in communication - What's law got to do with it?
Conversations are emotional, two-way, and personal. What we aren't talking about enough is the possibility of a breach of trust arising from machine learning's ability to give conversational, contextual replies. That day isn't far. With it, communication - the most important thread in professional relationships and the foundation on which organizations are built - is at stake. The truth is: when AI goes from regular scripted responses to conversations with context, it will require users to exercise more caution than before. The change is inevitable, so the onus will lie on the human brain to exercise its options. Lest we forget, AI was not invented to make us dumb; it is meant to make us smarter.
In times like today, when emails can be used as evidence in a court of law, people could blame technology for their own selfish motives. Rebuttals like 'the machine sent it' or 'I clicked on it by accident' could be used as excuses to get out of tricky situations. And one day soon, when AI can do sentiment analysis and opinion mining - gauging your personal and interpersonal well-being (is a person happy, stressed, or angry? What is the tone of conversations between you, your spouse, and your kids?) - there could be serious invasions of privacy, or even manipulation of it.
The emotional quotient
I find the grammar-policing app Grammarly quite liberating. It has an AI-powered tone detector that shows you how your email sounds (confident, rude, joyful, etc.) before you hit the send button. Then again, this 'machine' has a lot to learn about who you are and how you sound in general - because the real fun will begin when AI can decipher sarcasm, humour, and context from your social media, link them to sentiments and trends (about news, people, and brands), and bring all that into a conversation in progress. When one can't make out whether one is speaking to a human or a bot - that will invite a lot of speculation in the future.
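To make the idea of tone detection concrete, here is a deliberately simple, keyword-based sketch in Python. Real tone detectors like Grammarly's rely on trained NLP models, not word lists; the tone labels and keyword sets below are invented purely for illustration.

```python
# Toy tone detector: counts how many keywords from each tone's list
# appear in the text and picks the tone with the most hits.
# The labels and word lists are illustrative assumptions, not a real model.
TONE_KEYWORDS = {
    "confident": {"certainly", "definitely", "will", "guarantee"},
    "joyful": {"great", "wonderful", "delighted", "happy"},
    "rude": {"stupid", "useless", "whatever", "nonsense"},
}

def detect_tone(text: str) -> str:
    """Return the tone whose keywords appear most often, or 'neutral'."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    scores = {
        tone: sum(word in keywords for word in words)
        for tone, keywords in TONE_KEYWORDS.items()
    }
    best_tone, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_tone if best_score > 0 else "neutral"

print(detect_tone("I will definitely deliver this, guarantee it."))  # confident
print(detect_tone("See you at the meeting."))                        # neutral
```

A production system would replace the keyword lookup with a classifier trained on labelled text, but the interface - text in, tone label out - stays the same.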
Unraveling the mysteries of science
On the other hand, I think NLP can unwrap some of the bigger mysteries around language - how it works, how we learn new languages, how we put words into sentences, and the effect those words can have on someone. How does the brain link language to perception, and how does one react? Or the union of written and visual communication - the balance between what we write and what we think. The beautiful mysteries of the science of communication could in fact help us understand human behaviour at a far deeper level, don't you think?
Walking with the changing times
As responses get smarter, they may not always be original, but they will keep getting refined to match an intelligent human response. However, I would add that AI is here to make our lives easier; it should in fact free up more time for human interaction, not limit it. While the machine is busy picking up your peculiarities, never give it the power to take away what's unique to you. Don't delegate to the machine - write your own personalized emails instead. Cherish the words. Know that this is the only way we'll be able to trust each other in the age of AI.