AI and deep learning bring a significantly higher level of automation at scale, whether that is the automation of insight analysis or of manual, repetitive human tasks. Thanks to their ability to learn from significantly larger datasets, it is possible to build systems that outperform humans in isolated vertical tasks. We have seen AI outperform humans in tasks such as predictive maintenance, document classification and passenger counting.
Recently, however, several big challenges with deep learning have come to light. Three of the main challenges I want to address here are:
These three areas are of critical importance to ensuring the safe and efficient development and deployment of deep learning systems in the sector. For simplicity, we will explain these challenges with an example: automating the customer support process with a deep learning chatbot.
Collaboration between machine learning experts, policy experts and industry domain experts will enable large-scale deployments of deep learning systems to flourish
For starters, let’s assume we have trained our deep learning chatbot by giving it a large number of gathered data examples. Once it learns from the data, it can perform the desired task of answering our customers’ questions. The challenge is that, throughout the development of this deep learning model, multiple unnoticed biases were introduced. Whether due to incorrect examples or inherently biased datasets, the system will always carry some inherent bias. Our objective must therefore be to mitigate the large negative effects of undesired bias; if we don’t, we will end up with undesired behaviours that may have a large negative impact on the business. There have been high-profile cases where undesired bias went unnoticed, such as Microsoft’s ‘racist’ chatbot1 and Amazon’s ‘sexist’ recruitment platform2, both of which exhibited negative behaviour due to unnoticed bias.
Fortunately, it is possible to identify and mitigate undesired bias to a reasonable level by ensuring the right metrics are used to evaluate the system, and the right monitoring is put in place to confirm performance. The Institute of Electrical and Electronics Engineers (IEEE) is currently leading a standard on algorithmic bias considerations3, which will allow for formal standard definitions of bias in deep learning.
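As a minimal sketch of what "the right metrics" can mean in practice, the snippet below computes one common fairness measure, the demographic parity difference: the gap in positive-prediction rates between two user groups. The classifier outputs, group labels and the escalation scenario are all hypothetical illustrations, not part of any real system described in this article.

```python
# Hedged sketch: measuring one simple bias metric (demographic parity
# difference) for a hypothetical chatbot component that decides whether
# to escalate a conversation to a human agent. All data is illustrative.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between the
    two groups present in `groups` (1 = positive prediction)."""
    rates = {}
    for g in set(groups):
        selected = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(selected) / len(selected)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = escalate) and user group labels.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is escalated 75% of the time, group B only 25%: a gap this
# large would flag the model for closer inspection before deployment.
print(demographic_parity_difference(preds, groups))  # 0.5
```

A metric like this is cheap to compute on held-out data and can be tracked continuously in production, which is exactly the kind of monitoring the paragraph above refers to.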
This article first appeared on www.globalrailwayreview.com