How should we engage with AI in our digital communities? As a system or a person?
Yesterday, I went to a talk hosted by the Leadership College London, whose guest, J. Nathan Matias, spoke on the future of AI. Nathan is a computational social scientist who researches citizen behavioural science with a view to creating a fairer, safer and more understanding internet.
Some of the key areas that come into question regarding how we as humans engage with AI revolve around:
Personhood
Sin, prediction, forgiveness
Surveillance and data
We have to understand that AI is already here and with us, although not always in the way we expect or the way it is portrayed in many films and TV programmes.
AI is separated into:
General AI (a machine with the ability to apply intelligence to any problem, rather than just one specific problem)
Narrow AI (artificial intelligence focused on one narrow task)
General AI is the kind of AI you see from robots in films, and although great for science fiction, it is not so useful for day-to-day use.
Narrow AI, however, is very useful for recognising trends and consequently automating processes and responses. This is the kind of AI that will help you to get rich!
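To make the distinction concrete, here is a minimal sketch (my own illustration, not something from the talk) of the kind of single-task pattern recognition narrow AI performs: flagging an upward trend in a stream of values, which could then trigger an automated response.

```python
def moving_average_trend(values, window=3):
    """Return True when the latest moving average exceeds the previous one,
    i.e. the series appears to be trending upwards.

    A deliberately narrow task: this function knows nothing except how to
    compare two rolling averages, which is exactly the point of narrow AI."""
    if len(values) < window + 1:
        return False  # not enough data to compare two windows
    recent = sum(values[-window:]) / window
    previous = sum(values[-window - 1:-1]) / window
    return recent > previous

# A rising series is flagged; a flat one is not.
print(moving_average_trend([10, 11, 12, 15, 18]))  # True
print(moving_average_trend([5, 5, 5, 5, 5]))       # False
```

Real trend-spotting systems are far more sophisticated, but they share this shape: one narrow question, answered automatically, over and over.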
When AI is learning about a subject or process, it may appear to employ strategic vagueness in its vocabulary, but this is not a matter of conscious deception: the AI is simply waiting to learn how to react or respond, as it is trained to do.
This poses serious questions as to how we should engage with AI. Should we engage with it as an infrastructure system or as a person? This is a serious question because AI learns from and responds to behaviours: it will see patterns of emotion and empathy, and will reply with whatever responses its learned algorithms deem appropriate.
Other questions follow from this, such as: can our interaction with AI heighten our engagement with others? Some studies and experiments so far suggest it can, as real emotion can be removed from a situation. But as a Christian, can this interaction extend to heightening our experience in areas of worship or understanding? I think responsive technologies definitely have an important part to play here in the future, and there are already basic examples of this in the translation services created by SPF.IO.
Studies have shown that people treat other people in much the same way as they treat a machine. We need to look at empathy, and at how people treat machines, to get the best out of AI and to ensure an upward progression rather than a downward spiral of learned experience.
Other questions then arise around how we should morally treat AI. This is where things get very tricky: as with any worldview, on what do we base our morals? This becomes particularly interesting because, in Western society, there have never been more questions about what to base morality on. Traditionally it has come from Judeo-Christian religion, but more and more, morality is seen in a fluid and relative form of "do as thou wilt."
Looking at recent trends of increased engagement with technology, and proposals for greater connection through the growth of information and AI technology, we increase our risk of conflict or misunderstanding. Misunderstanding can cause a downward spiral and have serious, dangerous consequences.
Misinformation has gained a lot of publicity over the past year, particularly with the advent of "fake news", which has created an environment of caution where people find it difficult to deduce what is real and what is not. Many news sources give a perspective that often carries political nuances, and this can create feedback loops that cause certain news to trend. Feedback loops can create a self-fulfilling prophecy and manipulate people on a mass scale very easily.
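The feedback-loop mechanism can be sketched in a toy simulation (my own illustration, with made-up story names): a feed that surfaces stories in proportion to their past clicks gives early random popularity a compounding advantage, which is the "rich get richer" dynamic behind trending.

```python
import random

def simulate_trending(stories, steps=10_000, seed=42):
    """Toy 'rich get richer' model of a trending feed.

    Each click is drawn in proportion to the clicks a story already has,
    so small early differences compound into a runaway trend — a
    self-fulfilling prophecy, not a measure of the story's quality."""
    rng = random.Random(seed)
    clicks = {s: 1 for s in stories}  # every story starts out equal
    for _ in range(steps):
        total = sum(clicks.values())
        r = rng.uniform(0, total)
        cumulative = 0
        for s in stories:
            cumulative += clicks[s]
            if r <= cumulative:
                clicks[s] += 1  # being seen earns yet more visibility
                break
    return clicks

shares = simulate_trending(["story_a", "story_b", "story_c"])
print(shares)  # typically one story ends up dominating
```

Run it with different seeds and a different story usually "wins": the winner is an accident of early feedback, which is exactly why trending metrics can mislead.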
Google, Facebook, Twitter and, more recently, YouTube have controversially employed ways to reduce the spread of certain voices. This is done by creating algorithms that limit the spread of information.
Big questions are asked about this, however: who is the arbiter of truth? Many people find it concerning and see it as a removal of free speech. Recent cases, such as Roger Stone's removal from Twitter (a credible right-wing voice), have caused outrage among some free-speech advocates, especially as there was no violation of the Terms of Service. It will be interesting to see what the outcome of his upcoming lawsuit is. This is not the first time such controversies have happened, however: Google has been fined, YouTube has demonetised channels and Facebook has deleted people's posts.
Matt Drudge, creator of the Drudge Report, said in a rare interview, "They are trying to put you into an internet ghetto." I believe this is true: there is selective acceptance of certain schools of thought, and the rules need to be fair for all.
We need to cater for differing worldviews, and there should be freedom to express speech and opinion; otherwise I fear that, through AI, we could be entering, or have already entered, a very scary 1984-style society monitored by a "Ministry of Truth." I think an approach similar to the one attributed to Voltaire is one we should not lose: we should appeal to people's better nature at a time when we are interacting with AI and have the ability to affect it positively or negatively.
To conclude, we need people to want to engage with AI positively for it to be a benefit to society. We need more and more people who bridge differences with a combination of truth and love. Things that will not solve the issues, but will compound the problems, are "fake love" and "tolerance." The trend at the moment is to focus on loving all and accepting all, but sometimes to truly love someone you do not tolerate everything. If we engage with AI in truth and love, this will help progress our society, and technology can be used as a tool to improve our lives and society, not bring them down.