AI with IQ
Artificial intelligence and machine learning are supposed to provide valuable assistance in our daily work routine. But they can also reinforce discrimination and prejudices. Anyone who uses them must therefore be aware of their weaknesses.

The use of AI must not put individuals at a disadvantage. Photo: Treety – iStock
Everyone is talking about it, but it often evokes false associations: Artificial intelligence (AI) is fundamentally different from human intelligence. Even in ten or twenty years, machines will have no consciousness and no free will. Nevertheless, computer systems using what IT specialists more precisely call machine learning are already doing astonishing things today.
What machines can learn from large amounts of training data is to recognize patterns – differences and similarities. The capabilities of such systems have grown rapidly in recent years, especially where machine learning based on data analysis turns into deep learning. In that case, the learning algorithms independently develop decision spaces in so-called neural networks, without outsiders being able to see what the system bases its decisions on.
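To make the idea of pattern recognition concrete, here is a minimal, purely illustrative Python sketch. It uses the open-source scikit-learn library and its bundled iris flower dataset – neither is mentioned in the article, and real systems are far larger – to show the principle: a small neural network is given labeled examples and develops its own internal decision rules, which the programmer cannot read off directly.

```python
# A minimal, purely illustrative sketch: a small neural network learns to
# tell three flower species apart from labeled examples. scikit-learn and
# the iris dataset are assumptions chosen for brevity, not the article's
# actual systems.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)  # measurements plus species labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Two hidden layers of 16 neurons each develop their own decision space
# during training -- nobody programs the classification rules by hand.
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# The network generalizes to flowers it has never seen ...
print("accuracy on unseen examples:", model.score(X_test, y_test))
# ... but its learned weights do not explain *why* it decides as it does.
```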
Especially at the beginning of AI development, results were considered to be most important. Researchers were happy when computers were able to distinguish a flower from a bird. The next step was to distinguish a flower from a tree and then a beech from a lime tree. A few years later, the US travel platform Kayak was already using AI to analyze customers’ hotel photos in order to determine automatically whether a hotel had a fitness center or a swimming pool.

Human programmers monitor the training of the learning systems. Photo: Treety – iStock
Today, people have high hopes for AI systems with regard to challenges such as traffic control, logistics optimization, and autonomous driving. The popular digital language assistants Alexa, Google Assistant, and Siri are largely based on machine learning. AI is also gaining ground in medical applications. Even now, AI systems often achieve a better success rate than doctors when searching for disease indicators in computed tomography or X-ray images. In early cancer detection, machine pattern recognition already achieves astonishing results and is expected to improve diagnoses within five to ten years to such an extent that the chances of recovery for affected patients increase considerably.
Quality assurance with AI
AI systems have also long since caught on in industry and everyday working life. Machine manufacturers such as Trumpf and elevator specialist Thyssen-Krupp use machine learning for predictive maintenance. Data analyses provide precise predictions as to when a production machine or elevator will probably break down and enable the operator to organize maintenance measures in good time. With Visual Insights, IBM offers manufacturing companies a turnkey solution for quality assurance. To do so, an image-recognition AI analyzes the manufactured products directly on the assembly line.
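One common building block of such predictive maintenance is spotting unusual sensor patterns before a machine actually fails. The following Python sketch illustrates the idea under assumptions of my own: the sensor values are synthetic and the model choice (an isolation forest from scikit-learn) is illustrative – it is not how Trumpf, Thyssen-Krupp, or IBM implement their systems.

```python
# Illustrative sketch of one building block of predictive maintenance:
# an anomaly detector flags unusual sensor readings before a breakdown.
# All values are synthetic; the model choice is an assumption.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Historical" readings from a healthy machine: vibration (mm/s) and
# bearing temperature (degrees Celsius).
healthy = np.column_stack([
    rng.normal(2.0, 0.3, 5000),   # vibration
    rng.normal(60.0, 4.0, 5000),  # temperature
])

detector = IsolationForest(contamination=0.01, random_state=42).fit(healthy)

# New readings from the shop floor: the last one drifts upward,
# a typical precursor of wear.
new_readings = np.array([
    [2.1, 61.0],
    [1.9, 58.5],
    [3.8, 78.0],
])
print(detector.predict(new_readings))  # 1 = normal, -1 = flag for maintenance
```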
But the use of AI in the working world is by no means limited to machines – working people are also a focus of artificial intelligence. Founded in 2017, the start-up Motion Miners combines wearables, external sensors such as positioning beacons, and machine learning to analyze the movements of industrial workers. The goal is to identify and avoid unhealthy and ergonomically detrimental postures. Co-founder Sascha Feldhorst explains how pattern recognition is revolutionizing his work. “Artificial intelligence is far superior to the classical methods of process analysis,” says Feldhorst. “Where we used to be able to observe maybe two or three employees in the course of a day with a clipboard, we now analyze an entire department in the same amount of time.”
Motion Miners emphasizes that the analyses take place anonymously and are intended to serve only the well-being of the employees. The founders and developers at Motion Miners want to actively counter the danger that the collected data could also be used for surveillance and performance monitoring.

AI applications support HR managers with applicant management. Photo: Treety – iStock
Because, especially in the work environment, AI applications regularly show that what is useful to companies can have problematic consequences for the affected employees. For example, several software companies offer AI-based solutions that support personnel departments in pre-screening incoming applications. This may be welcome from the point of view of an overburdened HR administrator who has to deal with 500 incoming applications, but it does not guarantee a fair procedure for every applicant. Amazon experienced this in 2014, when the company tested a procedure in which an AI system ranked the applications received. After some time, the HR department noticed that the system had systematically put women at a disadvantage. The deep learning system had concluded that men with an affinity for technology were, for the most part, more enthusiastic about the company than female applicants – and probably based its selection primarily on this factor.
A matter of transparency
The example illustrates a major problem with the use of AI: Companies and individuals who make use of it don’t know what the system bases its decisions on. John Cohn has been involved with artificial intelligence for over 40 years and is now researching its use at IBM. He warns emphatically of the lack of transparency, especially in deep learning. “In the past, we wrote code and were able to observe what it did. When a problem arose, we could understand exactly why. That no longer works with AI. Today we don’t know why machines come to a certain decision.” This raises the question of the conditions under which such systems remain controllable.
Definition of Terms
Artificial intelligence (AI) is an umbrella term for technologies that deal with perception, reasoning, learning, and behavior. Machine learning (ML) is a subdiscipline of AI and describes methods that allow machines to independently generate knowledge from experience. Deep learning (DL) describes self-learning systems based on multiple layers of artificial neural networks; it is a part of ML (source: NVIDIA).

Graphic: designed by Freepik
Meanwhile, AI experts know many examples where deep learning systems did not, as hoped, make the objectively best decision, but simply reproduced human prejudices. A famous example is the one in which a Google image recognition algorithm was used to distinguish humans from animals – and classified some dark-skinned people in the photos as gorillas. But what was striking in this case can also have a more subtle effect – and may thus escape the notice of human programmers or users. Discrimination and prejudice already creep in during the training of learning systems. This applies to machine learning monitored by programmers as well as to deep learning based on the black-box principle. IBM researcher John Cohn explains the connection: “The data sets used to teach the algorithms are usually historical data. Conscious or unconscious discrimination in the past then distorts pattern recognition or decision-making processes.” According to Cohn, AI systems are even highly likely to reinforce existing discrimination and prejudice.
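This mechanism can be reproduced in a few lines. The following Python sketch is a constructed toy example – all data is synthetic, and it has nothing to do with Amazon’s or Google’s actual systems: a model is trained on “historical” hiring decisions that penalized one group. The protected attribute itself is withheld from the model, yet the bias survives through a correlated proxy feature.

```python
# Constructed toy example (all data is synthetic): historical discrimination
# is learned by a model even though the protected attribute is never used
# as an input -- a correlated proxy feature carries the bias instead.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
# Proxy feature correlated with the group, e.g. membership in a club
# that historically attracted mostly group-A members.
proxy = (rng.random(n) < np.where(group == 0, 0.7, 0.2)).astype(float)
# Actual qualification: identically distributed in both groups.
skill = rng.normal(0.0, 1.0, n)

# "Historical" decisions rewarded skill but unfairly penalized group B.
logits = 1.5 * skill - 1.5 * group
hired = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train only on the seemingly neutral features; the group is left out.
features = np.column_stack([proxy, skill])
model = LogisticRegression().fit(features, hired)

p = model.predict_proba(features)[:, 1]
print("mean predicted hiring score, group A:", round(p[group == 0].mean(), 3))
print("mean predicted hiring score, group B:", round(p[group == 1].mean(), 3))
# Group B scores lower although skill is distributed identically --
# the past discrimination has been learned through the proxy feature.
```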
Automated decision-making

When using AI-supported systems, the human operator should maintain ultimate control. Photo: Treety – iStock
Seen in this light, some AI-based decision-making seems problematic – for example, when banks use deep learning algorithms to assess the creditworthiness of their customers. Scoring methods have done this for decades, but AI solutions give the practice a new dimension.
Or when AI-based, so-called anti-fraud systems support insurance employees in detecting possible cases of fraud. Various insurance companies rely on AI-based voice computers for customer service – and, when someone reports damage by telephone, use targeted queries to make an initial assessment of the likelihood of fraud. Such examples clearly show the thin dividing line between the legitimate interests of companies and the general public on the one hand, and the danger, on the other, that AI algorithms make wrong decisions to the detriment of individuals who can hardly defend themselves against them. Organized insurance fraud costs European insurers tens of billions of euros every year. Understandably, the industry celebrated it as a success when, for example, the French insurance group Axa used AI to track down a fraud ring that had fabricated motor vehicle claims or used image-editing software to make damage appear worse than it was. The business model of a fraudulent clan in Sicily was particularly brutal: in return for the promise of money, people in need had their bones broken, and fictitious accidents were then reported in order to collect money from the insurers. By the time analysis software discovered suspicious patterns in these accident reports, losses of two million euros had already been incurred.
Preventive Risk Assessment
So how do we deal with the fascinating but also problematic possibilities of learning machines? Claudia Nemat, Chief Innovation & Technology Officer at Deutsche Telekom, says: “In general, algorithms and data should never be in the hands of just a few – neither powerful technology companies nor governments. Rather, we need to develop an understanding that data belongs to us as people.” It is also crucial that users are always aware of the potential weaknesses of AI systems. Ralph Müller-Eiselt, head of the Bertelsmann Foundation’s working group on political challenges and opportunities in a digitized world, emphasizes: “The greater the potential impact of automated decision-making, the more important a preventive risk assessment and a comprehensive review of the results become.” People must always have the last word – and should view decisions made by machines with a healthy dose of skepticism.
DEKRA and training specialist SoSafe focus on the human factor in IT security

Photo: DEKRA
According to a survey conducted by the German Federal Office for Information Security (BSI), around 90 percent of malware attacks in 2018 took place via e-mail. About one in ten e-mails containing malware passes unhindered through technical barriers such as spam filters or firewalls. These figures also show that machine systems must always be supplemented by competent people. Humans must be the last bastion that recognizes suspicious and dangerous messages. This is why the DEKRA Academy and SoSafe, a company specializing in cyber security training, are cooperating. Together they offer training courses that sensitize employees to the dangers of phishing e-mails and social engineering. The Security and Awareness Program trains employees to supplement and improve the security provided by machines with their own decisions and measures. The cooperation will initially apply to German-speaking countries; an international expansion is planned.