These days, there’s considerable talk surrounding artificial intelligence (AI). Tech companies worldwide are touting how their products address AI requirements.
And that includes Rambus, with its lineup of new GDDR6 and HBM2 PHYs. These products give SoC and system designers the high-speed data rates they need to move forward with next-generation AI, data center, ADAS, and 5G projects, among others.
Meanwhile, one legal expert is casting a modicum of doubt, prompting some AI leaders to stop, scratch their heads, and wonder what it means for them and their products.
The question being posed is this: “Is it too early to start thinking about regulating artificial intelligence?”
You ask, what regulations? What is there to regulate about AI?
Those questions and the associated arguments are reported in security editor Warwick Ashford’s article on ComputerWeekly.com. Ashford centers his piece on an earlier talk that Dr. Karsten Kinast gave at the European Identity & Cloud Conference 2019. Dr. Kinast is described as a legal expert and fellow analyst at KuppingerCole, an international analyst firm.
Referring to AI in a broad sense, Dr. Kinast prefaced his argument by saying that regulation needs to be introduced as soon as possible to ensure that AI brings maximum benefit to society.
Dr. Kinast strongly believes it is important to ensure that positive AI goals are reached. But he also cautions that “no single technology company or small set of tech companies is allowed to dominate setting the rules and the agenda for AI.”
Ashford writes that inroads are being made along those lines. He reports that in early April 2019, the European Commission (EC) published a set of principles around the ethical application of AI. Plus, it announced a major pilot in which the guidelines will be applied.
Dr. Kinast argued that these EC actions are not enough and that more must be done on a global scale. “Once truly self-learning AI is a reality, it will be too late to start trying to regulate it.”
Adding a note of caution, he said that “we need to guard against AI regulating us in the same way technology is already regulating how we communicate, how we think, how we look for solutions to problems and how we interact with others.”