
RPA Gets Smarter – Ethics and Transparency Should Be Top of Mind

The earliest incarnations of robotic process automation (RPA) technologies followed basic rules. These systems were akin to user interface testing tools in which, instead of a human operator clicking areas of the screen, software (or a “robot,” as it was called) would do it instead. This freed up the user’s time spent on very low-level tasks such as scraping screen content, copying and pasting, and so on.

Although basic in functionality, these early RPA implementations provided clear benefits in speed and efficiency. The tools evolved to encompass basic workflow automation over the following years, but the processes were rigid and had limited applicability across an enterprise.

Soon after 2000, automation companies such as UiPath, Automation Anywhere, and Blue Prism were founded (though some under different names at first). With a clear focus on automation, these companies began to make significant inroads into the enterprise.

RPA gets smarter

Over the years, the functionality of RPA systems has increased dramatically. They’re no longer the rigid tools of their earliest incarnations, but instead offer much smarter process automation. UiPath, for example, lists 20 automation products on its website in groups such as Discover, Build, Manage, Run, and Engage. Its competitors have similarly comprehensive offerings.

The use cases for robotic process automation are now many and varied. For example, with built-in smart technology, rather than just clicking regions on the screen, systems can now automatically extract the content of invoices (or other customer-submitted documents) and convert it into a structured format. These smart features may well be powered by forms of artificial intelligence, albeit hidden under the hood of the RPA app itself. Automation Anywhere has a good example of this specific use case.
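To make the idea of “extract and convert to a structured format” concrete, here is a minimal sketch of the step that happens after OCR. It is illustrative only: the invoice text, field names, and regex patterns are assumptions, and real RPA platforms use trained extraction models rather than hand-written rules.

```python
import re
import json

# Illustrative only: real RPA platforms pair OCR with trained extraction
# models; this sketch runs simple regexes over already-OCR'd invoice text.
INVOICE_TEXT = """
Invoice No: INV-20421
Date: 2021-09-01
Vendor: Acme Supplies Ltd
Total Due: $1,284.50
"""

# Hypothetical field names and patterns, chosen for the example above.
PATTERNS = {
    "invoice_number": r"Invoice No:\s*(\S+)",
    "date": r"Date:\s*([\d-]+)",
    "vendor": r"Vendor:\s*(.+)",
    "total_due": r"Total Due:\s*\$([\d,.]+)",
}

def extract_invoice_fields(text: str) -> dict:
    """Pull named fields out of raw invoice text into a structured record."""
    record = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        record[field] = match.group(1).strip() if match else None
    return record

print(json.dumps(extract_invoice_fields(INVOICE_TEXT), indent=2))
```

The structured record produced here is what would then be written into a database or passed to downstream automation steps.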

Given the breadth of use cases now addressed by RPA technologies in enterprises, it is difficult to see a path of product development and expansion that does not add more AI functionality to the RPA tools themselves. While still delivered in the robotic process automation package, this functionality is likely to shift from being hidden under the hood and powering specific use cases (such as content extraction) to being fully fledged and directly accessible to the user.

The blurring of AI and RPA

RPA vendors will be competing with AI vendors who sell automated machine learning software to the enterprise. Known as AutoML, these tools allow users with little or no data science experience (often called citizen data scientists) to create custom AI models with their own data. These models aren’t limited to specifically defined use cases; they can be anything business users want to build (and have the supporting data for).
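At its core, what AutoML automates is the search over candidate models. A hedged sketch of that basic loop follows, using scikit-learn; the dataset is synthetic and the two candidate model families are illustrative assumptions, while real AutoML products layer feature engineering, ensembling, and explainability on top.

```python
# Minimal sketch of the search loop AutoML tools automate: try several
# model families on tabular business data and keep the best performer.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for a business dataset (e.g. rows describing past invoices).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hypothetical candidate set; real tools search far more widely.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Evaluate each candidate with 5-fold cross-validation.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}
best = max(scores, key=scores.get)
print(f"Best model: {best} (accuracy {scores[best]:.3f})")
```

The point for citizen data scientists is that this selection happens behind a form or wizard, not in code.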

Returning to the example above: once the data is extracted from the invoices, why not let the customer build a custom AI model to prioritize those invoices without bringing in, or connecting to, a third-party AI tool? This is the next logical step in the RPA market; some leaders in the space already have some of these features in place.

This blurring of the lines between robotic process automation and artificial intelligence is particularly relevant right now because, alongside specialist RPA vendors, established technology companies such as Microsoft are bringing their own low-code RPA solutions to market. Taking Microsoft as an example, it has a long history with artificial intelligence. Through Azure, it offers many different AI tools, including tools for creating custom AI models and a dedicated AutoML solution. Most relevant here is its drive to combine its products into unique value propositions. In our context, that means low-code RPA and Azure’s AI technologies are likely to become closely aligned.

The evolution of the debate on the ethics of AI

As RPA and AI technologies evolve, so do discussions, and in some jurisdictions regulations, on the ethics of AI systems. Legitimate concerns have been raised about the ethics of AI and the diversity of the organizations that build it.

In general, these discussions and regulations aim to ensure that AI systems are built, deployed and used in a fair, transparent and accountable manner. There are essential organizational and ethical reasons to ensure that your AI systems behave ethically.

When systems are built that run on data representing people (such as in HR, finance, health, or insurance), those systems must be transparent and impartial. Even beyond use cases built on people’s data, organizations now require transparency in their AI so they can effectively assess the operational risks of deploying that AI in their business.

A typical approach is to define the ethical principles of the business, create or adopt an ethical AI framework, and continually assess your AI systems against that framework and those principles.
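One concrete check such a framework might mandate is comparing a model’s positive-outcome rate across groups (a demographic parity check). The sketch below is an assumption about what one such control could look like, not a complete fairness audit; the toy data, group labels, and 0.2 threshold are all illustrative.

```python
# Illustrative fairness control: flag a model whose approval rate differs
# too much between groups. Threshold and data are assumptions.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Toy data: model decisions (1 = approve) and a protected attribute.
preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # threshold chosen for illustration only
    print("Flag this system for review under the ethics framework")
```

Checks like this only have teeth if they run continually, against each model version, as part of the governance process.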

As with RPA, the development of AI models can be outsourced to third-party businesses. Assessing the transparency and ethics of these systems then becomes even more important, given the reduced visibility into how they were built.

However, most public and organizational discussions of ethics take place only in the context of artificial intelligence (where media headlines are concentrated). For this reason, developers and users of RPA systems might feel that these ethical concerns do not apply to them, as they “only” work with process automation software.

Automation can impact people’s lives

If we go back to the invoice-processing example used before, we saw the potential for a custom AI model in RPA software to automatically prioritize which invoices to pay. Only a minor technological change would turn this into a healthcare use case that prioritizes medical claims rather than invoices.

The RPA technology could still automatically extract data from claim documents and translate it into a structured format. The business could then train a custom classification model (using historical claims data) to prioritize payments or, conversely, flag payments to be put on hold pending review.
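A hedged sketch of that claims-prioritization model follows. The column names, the tiny in-memory dataset, and the choice of a gradient-boosted classifier are all assumptions for illustration; a real system would need far more data plus validation and bias testing before anything like this touched live claims.

```python
# Sketch of a custom claims classifier trained on historical outcomes.
# All data and feature names here are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Historical claims with a label: 1 = paid promptly, 0 = held for review.
claims = pd.DataFrame({
    "claim_amount": [120.0, 4500.0, 310.0, 9800.0, 75.0, 2200.0],
    "days_since_service": [3, 45, 10, 60, 2, 30],
    "prior_claims": [1, 7, 2, 12, 0, 5],
    "paid_promptly": [1, 0, 1, 0, 1, 0],
})

X = claims.drop(columns="paid_promptly")
y = claims["paid_promptly"]

model = GradientBoostingClassifier(random_state=0)
model.fit(X, y)

# Score an incoming claim that the RPA layer has already extracted.
new_claim = pd.DataFrame([{
    "claim_amount": 850.0,
    "days_since_service": 7,
    "prior_claims": 1,
}])
print("Probability of prompt payment:", model.predict_proba(new_claim)[0, 1])
```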

Here, however, the ethical concerns should be very apparent. The decisions made by this model, embedded in the RPA software, will directly affect the health and finances of individuals.

As seen in this example, what may appear to be relatively benign automation software is in fact evolving to reduce (or potentially completely remove) humans from the loop of critical decisions that impact people’s lives. The technology may or may not be explicitly labeled and sold as artificial intelligence; either way, ethics should always be top of mind.

We need a different lens

Perhaps it is best to view these ethical concerns not through the lens of AI, but rather through the lens of automated algorithmic decision-making.

The reality is that it’s not just AI technology making decisions that should concern us, but any automated approach that lacks sufficient human oversight, whether it’s powered by a rules-based system, robotic process automation, shallow machine learning, or complex deep learning.
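What “sufficient human oversight” can mean in practice is a routing rule: act automatically only when the system is confident, and queue everything else for a person. The sketch below is one possible pattern, with the 0.9 threshold and the `Decision` record both illustrative assumptions rather than a prescribed design.

```python
# One way to keep a human in the loop: automated decisions are actioned
# only above a confidence threshold; the rest go to a human reviewer.
from dataclasses import dataclass

@dataclass
class Decision:
    claim_id: str
    approve: bool
    confidence: float

REVIEW_THRESHOLD = 0.9  # illustrative; set by the governance process

def route(decision: Decision) -> str:
    """Return where this decision goes: straight through, or to a person."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto-actioned"
    return "queued for human review"

for d in [Decision("C-101", True, 0.97), Decision("C-102", False, 0.62)]:
    print(d.claim_id, "->", route(d))
```

The same pattern applies whether the confidence score comes from a deep learning model or a simple rules engine, which is exactly the point of the broader lens.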

Indeed, if you look at the recently announced ethics, transparency and accountability framework for automated decision-making in the UK, which targets the public sector, you will see that it focuses on “automated decision-making.” According to the guidance document, “Automated decision-making refers to both solely automated decisions (no human judgement) and automated assisted decision-making (assisting human judgement).”

Likewise, the GDPR has been in force in the European Union for some time now, clearly spelling out the rights of individuals regarding automated individual decision-making. The European Commission gives the following definition: “Decision-making based solely on automated means occurs when decisions are made about you by technological means and without any human involvement.”

Finally, in 2020 the State of California proposed the Automated Decision Systems Accountability Act, with similar objectives and definitions. In this act, the term (which covers artificial intelligence but not explicitly robotic process automation) is defined as follows: “‘Automated decision system’ or ‘ADS’ means a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts persons,” with assessment of accuracy, fairness, bias, discrimination, privacy, and security. It is therefore clear that this more general lens is recognized in the development of public policy.

Businesses should apply governance to RPA too

As organizations put in place teams, processes, and technologies to govern the development and use of AI, these need to be extended to cover all automated decision systems. To reduce the load and facilitate large-scale operation in large organizations, there shouldn’t be one set of processes and tools for RPA and another for AI (or indeed, a separate one for every business AI model).

That fragmented approach would result in a huge manual effort to pull together the relevant information, make it comparable, and map it to the chosen process framework. Instead, a unified approach should enable a common set of controls that lead to informed decision-making and approvals.

Nor should this be seen as incompatible with the adoption of RPA or AI; clear guidelines and approvals allow teams to move forward with implementation, knowing the limits within which they can operate. When you use the more general lens, rather than one that targets only AI, the implication becomes clear: ethics must be a priority for the developers and users of all automated decision-making systems, not just AI, and that includes robotic process automation.

Image credit: Pixabay; Pexels

Stuart Battersby

CTO @ Chatterbox Labs

Dr. Stuart Battersby is a technology leader and the CTO of Chatterbox Labs. With a PhD in Cognitive Science from Queen Mary University of London, Stuart now leads all research and technical development for Chatterbox’s AIMI ethical AI platform.
