The world of artificial intelligence is constantly evolving, and so is the legal and regulatory environment in which it exists. Michael Hayes, Senior Manager of Government Affairs at the Consumer Technology Association (CTA), focuses on the challenges that arise when emerging technologies run up against existing laws and regulations. Michael previously worked on Capitol Hill on patent reform, stopping patent trolls. In his current role, he works to frame the emerging policy discussion so that the technology can thrive and provide competitive advantages for the companies implementing it without introducing new risks.
Data and Privacy Discussions around AI
There has been a lot of concern about corporations' and governments' use of data and the role of privacy. Worldwide, governments are implementing rules and regulations to safeguard the use of data, most notably the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). In 2018, the CTA started an artificial intelligence (AI) working group made up of companies in the AI space to set AI policy principles, among which privacy and the use of data are paramount.
Since data fuels AI systems, and high-quality data sets are needed for AI to work properly and deliver expected results, companies are making increasing use of data, with or without the knowledge of their users and customers. At the same time, however, the successful use of AI is predicated on public trust. The public wants to know how and why AI is benefiting them, and most importantly how their data is being used. Organizations need to strike a reasonable balance between the data needed for AI to work properly and people's trust that their data is being used responsibly.
However, that balance is not necessarily the same from one nation to another. What might be an acceptable use of data in the United States might not be acceptable in Europe or Asia. Michael believes that establishing definitions is highly important to AI, but also very difficult to do. Those who operate in the AI ecosystem know that there is no commonly accepted definition of AI; in fact, we can't even define intelligence itself. Ask a room of people to define AI and you will likely get a slightly different answer from each of them. Yet, in order to inform policy makers and provide a general framework for laws and regulations, sooner or later a definition of AI will end up in a statute or legislation. Indeed, AI laws, policies, and regulations are already coming.
In 2018, Congress held a few hearings on AI. The first brought in experts from industry and academia to educate the committee on how AI was being deployed and the challenges it faced. The second had government officials speak about how their agencies can use AI to make their work more effective. The third brought in industry professionals, including CTA's CEO, to talk about charting a course forward on policy and where the subcommittee should focus. These hearings addressed critical issues such as privacy and job displacement, helping legislators understand them before drafting legislation. Rather than rush to be first out of the gate, the subcommittee wanted to be the best informed before creating legislation. Other countries, such as France and China, have had major rollouts of AI policy. There is global competition in this space, and Michael believes that if industry and policy makers can move forward together, the US can maintain its leadership in AI.
The industry should be looking to get ahead of the government. If the industry does not come to a consensus on definitions, the task will fall to less-informed legislators. By actively participating in defining these terms and representing the needs of the industry, Michael and the CTA believe they can provide more informed insights to help guide policy. Policy makers working on AI-related legislation already draw on industry participation to define AI terms, and Michael believes the CTA working group can be a trusted source.
Applying AI to Government Needs
AI can be applied to many different government use cases to improve the efficiency and productivity of government activities. For example, AI might help address some of the issues facing the expensive and challenging US healthcare environment. One approach is to use AI to process the large amounts of data generated by medical devices in hospital settings, giving doctors that information in real time to inform healthcare decision-making. Used this way, AI and machine learning can help doctors analyze data in real time and, in the process, augment their decisions, lower error rates, and provide prompt analysis or diagnosis.
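To make the idea of real-time decision support concrete, here is a minimal sketch of flagging out-of-range vital signs from streamed device readings. The reference ranges, field names, and data are purely illustrative assumptions, not clinical guidance or any specific hospital system's logic:

```python
# Illustrative sketch: flag vital signs that fall outside assumed
# normal ranges so a clinician can review them promptly.

# Reference ranges below are hypothetical example values.
NORMAL_RANGES = {
    "heart_rate": (60, 100),      # beats per minute
    "spo2": (95, 100),            # % oxygen saturation
    "temperature": (36.1, 37.2),  # degrees Celsius
}

def flag_abnormal(reading):
    """Return the names of vitals in `reading` outside their normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(vital)
    return alerts

reading = {"heart_rate": 128, "spo2": 97, "temperature": 36.8}
print(flag_abnormal(reading))  # -> ['heart_rate']
```

A real system would of course use learned models over far richer data; the point is simply that streaming device readings can be screened automatically and surfaced to a doctor for judgment rather than replacing it.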
Likewise, AI can help make government systems more resilient in the face of emerging cyberattacks. AI is already being applied to detect patterns of cyberattacks earlier than human operators can and to automatically mitigate those issues. Furthermore, agencies such as the General Services Administration (GSA) and the Internal Revenue Service (IRS) are seeing huge returns from automating business processes and using AI to augment decision-making and predictive analytics. Indeed, governments worldwide believe in the long-term strategic value of AI and are making significant efforts to grow their local AI ecosystems.
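As a toy illustration of the kind of pattern detection described above, the sketch below flags minutes whose request counts deviate sharply from the overall baseline using a simple z-score test. The traffic numbers and threshold are hypothetical, and real intrusion-detection systems use far more sophisticated models:

```python
import statistics

# Illustrative sketch: flag time windows whose request counts deviate
# sharply from the mean, a crude stand-in for anomaly detection.

def detect_spikes(counts, threshold=2.5):
    """Return indices of counts more than `threshold` std devs from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat traffic: nothing anomalous
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

traffic = [100, 104, 98, 102, 99, 101, 950, 103]  # requests per minute
print(detect_spikes(traffic))  # -> [6]
```

An automated responder could then rate-limit or quarantine the source behind a flagged window far faster than a human analyst paging through logs.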
As AI becomes more ubiquitous and appears in more aspects of our everyday lives, society will experience extraordinary benefits. At the same time, however, organizations are realizing that the use of AI raises a growing number of new issues. These include workforce job disruption, the use of facial recognition technology, challenges with ethics and biased data, the use of autonomous vehicles and even autonomous weapons, and other aspects of algorithmic decision-making. AI will have a disruptive effect on some people's jobs, but it will also greatly benefit others, such as those in the medical field. These workforce issues also extend to new requirements for education: the workforce of the future needs AI-ready skills to face the new AI realities ahead.
The future of AI depends on striking a balance between the promise of more intelligent machines helping us in our daily lives and the concerns about potential disruption. This balance needs to be reflected in a regulatory structure that is flexible and ensures the industry can thrive. Adoption of AI and cognitive technologies shows no signs of slowing, and governments are paying attention.
Read more at http://www.forbes.com/entrepreneurs/ by Kathleen Walch, Contributor