Tuesday, 22 September 2020

08:00
Attendee registration
08:45
Welcome by the organizer
08:50
Edge Security
In 2017, Microsoft introduced a new standard for IoT security with the white paper “The seven properties of highly secured devices.” Based on an analysis of best-in-class devices, the paper argued that seven properties must be present on every standalone device that connects to the internet.

Some of these properties, like the presence of a hardware-based root of trust or compartmentalization, require certain silicon features. Others, like defense in depth, require a certain software architecture, as well as the presence of other properties, like a hardware-based root of trust. Finally, other properties, such as renewable security, certificate-based authentication, and failure reporting, require not only silicon features and certain software architecture choices within the operating system, but also deep integration with cloud services. Assembling these critical pieces of infrastructure is difficult and error-prone.

Ensuring that a device meets these properties could therefore increase its cost. This led us to believe that the seven properties also present an opportunity for security-minded companies to implement these properties as a platform, freeing device manufacturers to focus on product features rather than security. Azure Sphere is Microsoft’s entry into the market with a seven-properties-compliant, end-to-end product offering.

Speaker: Jürgen Schwertl | Microsoft

Jürgen joined Microsoft in 1990. In Windows Program Management he helped shape Windows releases over almost a quarter-century. Moving into an Architect role in Microsoft Services, he began connecting "things" to the cloud, planning and delivering industrial IoT Solutions. In the One Microsoft IoT & MR team he is now enabling partners to build innovative IoT solutions secured from the chip to the cloud.

09:30
ML-Based Sensors Are Changing the Edge
Using machine-learning algorithms directly in a sensor, or in the immediate vicinity of the sensor hardware, opens up many new possibilities, especially for real-time data analysis and pattern recognition. At the same time, it creates new challenges whose combined complexity should not be underestimated.

This talk picks out two of these topics: severely constrained hardware resources and the remote updatability of ML models. Using a virtual condition-monitoring sensor as an example, it shows how these challenges can be solved successfully.
Speaker: Klaus-Dieter Walter | SSV

Klaus-Dieter Walter is CEO of SSV Software Systems GmbH in Hannover. He is known for talks at international events and articles in trade journals, and has published four books on embedded systems. In 2007 he co-founded the M2M Alliance e.V. and served on its board for many years. He is also a board member of the VHPready industry forum, which is creating a communication standard for virtual power plants, and since 2012 he has been a member of the Internet of Things expert group within the Intelligent Networking focus group of the German government's Digital Summit.

10:10
Coffee break and exhibition visit
10:40
Performance Analysis and Bottlenecks of AI on the Edge
Artificial intelligence has been successfully deployed on numerous types of platforms. The performance and architecture of devices used in the field often differ from the platforms in data centers. There are also vast differences in system requirements that drive the overall system design.

What can we do to get the necessary performance while keeping accuracy and not depleting the battery in a matter of seconds? We will discuss a few selected architectures with their models, and several techniques for optimizing algorithms, mostly focused on deep learning.

Speaker: Lukasz Grzymkowski | Arrow

Lukasz Grzymkowski works at Arrow as Technical Lead for Embedded Software. He started his career as a software engineer at Intel in the Data Center Group and joined Arrow in early 2018. In parallel, he is currently working on his Ph.D., focused on artificial intelligence and control theory.

11:20
Guiding AI to the Application Edge

As AI methods have matured in datacenters and on industrial PCs, we see people eager to apply them on embedded devices close to the application. The step from datacenters into constrained devices brings both challenges and opportunities.

In this talk we will consider the different mindsets needed for both environments and highlight challenges, tools and solutions to ease the transition.

Speaker: Dr. Nicolas Lehment | NXP

Nicolas Lehment is a systems architect at NXP’s Industrial Competency Center, where he advises on strategic topics such as ML/AI, connectivity and safety for industrial automation. Before joining NXP, he designed cutting-edge computer vision and robotics systems for ABB and Smartray. He has collaborated on research papers on topics ranging from ML-driven video classification and human pose tracking to collaborative robotics. This academic work earned him a doctoral degree at the Technical University of Munich.

12:05
Joint ASE & i-edge keynote: Machine Learning on the Edge: Anything but State of the Art
Speaker: Prof. Dr. Oliver Niggemann | Universität der Bundeswehr Hamburg

Oliver Niggemann studied computer science at the University of Paderborn, where he received his doctorate in 2001 with the thesis “Visual Data Mining of Graph-Based Data.” He then worked as a software developer at Acterna in the telecommunications industry, and until 2008 as lead product manager at dSPACE. Niggemann was active in the AUTOSAR committee and, until 2008, chaired the advisory board of the s-lab at the University of Paderborn. In 2008 he accepted the newly created professorship for technical computer science at the Hochschule Ostwestfalen-Lippe, where he headed the “Artificial Intelligence in Automation” lab. From 2008 to 2019 he was a board member of the Institute for Industrial Information Technologies (inIT), and until 2019 he was also deputy head of the Fraunhofer IOSB-INA branch for industrial automation. On 1 April 2019, Niggemann took up the university professorship “Computer Science in Mechanical Engineering” at the Helmut-Schmidt-Universität der Bundeswehr Hamburg, where he conducts research at the Institute of Automation Technology on artificial intelligence and machine learning for cyber-physical systems.

12:45
Lunch break and exhibition visit
13:45
Enable your Project with AI – An Approach to Support System Design and Architecting with AI-specific Properties

Many companies struggle with the decision of how to engage artificial intelligence for their products. In this talk, Robin Roitsch will introduce a systematic approach to checking specific AI properties and their current status within a project. The method provides an overview of the particular properties that need to be addressed during the design and development of AI applications, shows how to engage these topics if they have not been considered yet, and assists in evaluating the resulting solutions against stakeholders' requirements.

Research and development in the area of artificial intelligence started in the 1950s. However, the actual integration and deployment for small- and medium-sized embedded systems companies is still a widely unestablished process. The main reason is a lack of understanding of AI technology and AI-specific properties, which in turn leads to a lack of knowledge about potential application areas and uncertainty about whether migrating to an AI-based solution will pay off.

What is missing for applying AI in industry is a systematic approach to mapping the concerns of stakeholders to potential AI-enabled solutions, one that, from the architectural point of view, also discusses the consequences in terms of benefits and drawbacks.

Robin's work aims to support decision-makers who want to adopt AI's potential for their projects. He created an approach that enhances standard requirements engineering techniques, such as the Adequacy Check, with AI-specific property checks. This check raises awareness and confidence about the specific processes and steps that need to be considered throughout AI-based development. Furthermore, it assists in identifying shortcomings in the current status of your architecture and provides hints on how to address them. The approach does not necessarily require any input and can be used directly as a support tool for the requirements elicitation process when it comes to AI-specific needs.

Speaker: Robin Roitsch | NVIDIA

Working as a Business Development Manager in the domain of embedded and industrial systems, Robin's daily challenge is to keep customers up to date on the latest technologies and trends - especially in a relatively new technology like artificial intelligence, which goes beyond traditional HW/SW approaches. As an NVIDIA employee and former Technical Field Application Engineer at Arrow Electronics, Robin engages many different customers with a wide range of projects and use cases. When it comes to AI, his primary task is to identify the current status of the customer's project and to provide support with fundamental questions, deep-dive technical assistance, and general shortcomings throughout the complete project lifetime.

14:25
AI at the Edge – Enabling Time Critical Video Analytics
Advances in computing technology and AI algorithms have made it possible to perform Edge Video Analysis (EVA) on location in real-time. Discover the solutions which deliver the right compute for time critical use cases in industries ranging from traffic monitoring to medical diagnosis and security surveillance.

Taking AI-enabled video analysis to the edge to support real-time data delivery and decision making requires extremely powerful microprocessor units (MPUs) combined with graphics processing units (GPUs), which can accelerate compute-intensive applications by spreading computing workloads over multiple cores. Hear more about powerful Intel microprocessors dramatically boosted by the addition of NVIDIA GPUs, and about deep learning platforms based on NVIDIA’s Jetson family that provide a quick start for autonomous machine development.

Speaker: Marco Krause | Adlink

Marco Krause is Global Account Director at Adlink. He is responsible for global Tier 1 and Tier 2 accounts and heads the CEE team. Marco has more than 15 years of experience in the embedded, IPC, distribution, and IT environment. His special interest is in AI-related topics and applications (e.g. robotics, AGVs, autonomous cars).

15:05
Coffee break and exhibition visit
15:30
Integrating Connectivity, Computing, and Peripheral Functions at the IoT Edge
This talk examines the challenges that embedded system developers face in implementing edge computing functions using existing compute/control components, and proposes a new platform approach to accelerate development and enhance edge computing systems' performance.

The instinct of embedded system developers when they start developing a new edge computing device is to base it on the type of computing component they are most familiar with – a microcontroller, an FPGA, or an applications processor. In terms of raw processing horsepower, products are available in all of these categories that can handle the computing workload of machine learning or other AI applications. But the implementation of edge computing designs throws up different, and more difficult, problems than conventional MCU- or processor-based architectures face. And the root of these difficulties lies in the need to seamlessly combine multiple functions in a single system.

These functions include:
•    Connectivity – getting products to work seamlessly in a field of multiple wireless technologies
•    Security - compliance with emerging privacy and security requirements
•    Human-machine interface - aesthetically attractive industrial design
•    The user experience - making technology plug-and-play, while delivering behind-the-scenes software updates

At the same time, manufacturers must also consider other factors beyond the device itself, including:
•    Integration - making disparate technologies work together seamlessly
•    Cloud support - secure, scalable device management with easy on-boarding, supporting major platforms or in-house servers
•    Monetization - enhanced profitability through reduced support costs while providing for secure lifecycle management
•    Low-power operation – minimizing heat dissipation while addressing environmental issues

Individual components on their own fail to provide an ecosystem to support rapid development of a system which includes all necessary functional elements, while also meeting the marketing specifications for the user experience, monetization and so on.

In this session, we will describe how a new platform approach can simplify the development of IoT edge devices, helping embedded product manufacturers to get to market more quickly, while enhancing the performance and strengthening the security of connected devices at the edge of the network. We will also describe the essential components of such a platform, including pre-certified, low-power solutions for connectivity, security, device management and middleware. The speaker will describe these features drawing references to Cypress' IoT-AdvantEdge system, a roadmap for IoT edge device development that includes secure compute and connect solutions, IoT development kits, improved APIs, tools and support, partner certifications, online IoT community resources, and investments in standards-based security initiatives to help unify the growing market.

Speaker: Robert Conant | Cypress
16:15
MIOTY – The New LPWAN Standard for Sub-1 GHz Communication
Long range and low power consumption are key parameters for connected devices. To communicate over long distances, the technology is optimized for long-range radio-frequency (RF) communication. The Massive Internet of Things (MIOTY) protocol divides data packets into smaller subpackets: they spend less time on air, so the risk of collisions and data loss decreases.

While communication speed for wireless technologies has been a priority for decades, the focus has started to shift to other parameters such as long range and low power consumption. Networks with this focus are often referred to as low-power wide-area networks (LPWAN). Sensors in LPWAN networks communicate infrequently and can sleep for minutes to hours between transmissions. In such applications, high data throughput is not as important as being able to communicate over long distances, which is why the technology is optimized for long-range radio-frequency (RF) communication.

The Massive Internet of Things (MIOTY) protocol enables the division of data packets into smaller subpackets, causing them to spend less time on air and therefore decreasing the risk of collisions and data loss. The MIOTY solution offers a star network for low-power end nodes as well as a gateway solution for cloud connectivity - a complete long-range, low-power solution for worldwide Sub-1 GHz communication. The goal of the MIOTY Alliance, the governing body of MIOTY, is to enable the most accessible, robust and efficient connectivity solution on the market. The MIOTY protocol operates in license-free bands around the world, and there are no costs involved in using the radio spectrum, unlike narrowband IoT solutions.

The physical layer (PHY) and link layer of MIOTY technology are based on a publicly available document: the TS 103 357 public technical standard (TS) from the European Telecommunications Standards Institute (ETSI). Eliminating the risk of vendor lock-in, MIOTY has already been tested with three independent silicon providers, including Texas Instruments (TI) using the CC1310 microcontroller (MCU). As of today, MIOTY offers a private network but the expectation is that third parties will also offer a network solution as a service.

The strongest advantage of MIOTY is its TSMA method, which splits the data into smaller, equally sized subpackets that are distributed over the time and frequency domains. This makes the technology less susceptible to interference while also being friendlier to other radio systems. Because the packets are sent in smaller radio bursts, they spend less time on air, resulting in lower power consumption and longer battery life.
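The splitting idea can be sketched in a few lines (illustrative only: real TSMA uses the burst sizes, coding, and hopping patterns defined in ETSI TS 103 357; the subpacket size, channel count, and seed below are invented for this example):

```python
import random

def split_telegram(payload: bytes, subpacket_size: int = 4,
                   num_channels: int = 8, seed: int = 42):
    """Split a payload into equally sized subpackets and assign each one
    a (time slot, frequency channel) pair, as TSMA does conceptually."""
    # Pad so the payload divides evenly into subpackets.
    pad = (-len(payload)) % subpacket_size
    data = payload + b"\x00" * pad
    chunks = [data[i:i + subpacket_size]
              for i in range(0, len(data), subpacket_size)]
    rng = random.Random(seed)  # sender and receiver derive the same pattern
    return [(slot, rng.randrange(num_channels), chunk)
            for slot, chunk in enumerate(chunks)]

bursts = split_telegram(b"temperature=21.5C")
for slot, channel, chunk in bursts:
    print(f"slot {slot}, channel {channel}: {chunk!r}")
```

Because each burst is short, a collision destroys at most one subpacket, and forward error correction across subpackets (not shown here) lets the receiver still recover the full telegram.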

MIOTY can help overcome performance degradation in high-node-count networks and help reach remote sensors. MIOTY is suitable for a wide range of battery-operated applications that require a high density of sensors and small data volumes.



Speaker: Elin Wollert | Texas Instruments

Elin Wollert is an applications engineer at Texas Instruments in Norway. TI is a founding member of the MIOTY Alliance, and in Elin's role as project manager for MIOTY at TI, she is pushing the MIOTY technology forward.

17:00
End of the first conference day
18:00
Joint evening event with networking

Wednesday, 23 September 2020

08:00
Attendee registration
08:45
Welcome by the organizer


08:50

Track 1 (morning)
08:50: DevOps for Machine Learning at the Edge
If you work with code at all, you’ve probably heard of DevOps. An approach to application lifecycle management, it employs an (ideally, fully automated) continuous integration / continuous deployment (CI/CD) pipeline to streamline the process of building, testing, and deploying new code into a production environment.

In this session, I will cover the DevOps approach for ML scenarios. After you have the model you want, you’ll need to address model packaging, which involves capturing the dependencies required for the model to run in its target inferencing environment (at the edge this is typically a device). Containerization is the obvious choice; containers are the de facto unit of execution today across both the cloud and intelligent edge. You’ll also want to consider model formats that are agnostic of training and serving fabric, which is where reusable formats such as Open Neural Network Exchange (ONNX) can be useful.
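On the CI/CD side, the promotion step of such a pipeline often boils down to a quality gate that compares a candidate model against the one currently in production before anything is packaged and deployed. A minimal sketch (metric name and thresholds are invented for illustration):

```python
def should_deploy(candidate_metrics: dict, production_metrics: dict,
                  min_accuracy: float = 0.90,
                  max_regression: float = 0.01) -> bool:
    """Quality gate for a CI/CD model pipeline: deploy only if the candidate
    clears an absolute bar and does not regress against production."""
    cand = candidate_metrics["accuracy"]
    prod = production_metrics["accuracy"]
    return cand >= min_accuracy and cand >= prod - max_regression

# Candidate slightly below production, but within the allowed tolerance.
print(should_deploy({"accuracy": 0.93}, {"accuracy": 0.935}))  # True
# Candidate beats production but misses the absolute bar.
print(should_deploy({"accuracy": 0.88}, {"accuracy": 0.85}))   # False
```

In a full pipeline this gate would sit between the automated evaluation step and the containerization/deployment step, so a bad model never reaches the device fleet.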
Speaker: Veronika Zellner | Microsoft

Veronika Zellner is an Architect for Data & AI at Microsoft and passionate about all data-related topics, especially machine learning in the cloud and on the edge. She has several years of experience in business intelligence and IT consulting. Currently, Veronika works at Microsoft in Munich, Germany.

09:30: Running AI Applications on Limited-Resource Hardware

AI applications require a lot of computational power and a large memory footprint. It is therefore no surprise that deploying AI at the edge has traditionally required high-end processors, FPGAs, or GPUs. Recently, however, low-end devices have received a performance boost that enables them to run AI applications with limited features. This talk gives the audience an overview of the development status and of the software and hardware aspects that make this possible.

Running AI at the edge allows the user to run AI applications locally on the hardware without any intervention from the cloud. The benefits are fast response times, lower bandwidth consumption, and the ability to deploy devices in harsh environments where network facilities may not be available. AI applications demand computational power and memory that have traditionally been available only on high-end devices like MPUs, GPUs, VPUs, and FPGAs. The low-end devices, mostly MCUs, are defined here as devices running bare metal (no operating system), clocked under 1 GHz, and having limited memory. Consequently, deploying AI applications on those devices long seemed out of the question.

Recently, however, low-end devices have received a performance boost: some of them now reach clock frequencies of 600 MHz. As a result, the idea of bringing AI capability to these low-end devices has started to take root.

In this talk, we will discuss multiple approaches to give a big picture of what is currently being developed, what the trends are, and where the limitations lie. We will look at the new Arm Cortex-M55 processor, a new AI-capable microcontroller core, and examine how it helps boost AI performance. Then we will look at the software adaptations for microcontrollers (TensorFlow Lite, CMSIS-NN). Finally, we will look at the implementations coming from the silicon vendors.
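A key software adaptation behind running models on MCUs is post-training integer quantization, where float weights are mapped to int8 values via a scale and zero point. A minimal sketch of the affine scheme (illustrative only; real converters such as TensorFlow Lite choose scales per tensor or per channel, and the example weights are invented):

```python
def quantize(values, num_bits=8):
    """Affine quantization: real_value ≈ scale * (q - zero_point)."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    # The representable range must include 0 so that zero is exact.
    lo, hi = min(min(values), 0.0), max(max(values), 0.0)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [scale * (qi - zero_point) for qi in q]

weights = [-0.61, 0.0, 0.42, 1.37]          # invented example weights
q, s, z = quantize(weights)
restored = dequantize(q, s, z)
print(q, s, z)
print([round(v, 3) for v in restored])
```

Each weight now occupies one byte instead of four, and inference can run in integer arithmetic, which is exactly what MCU-oriented kernels like CMSIS-NN accelerate.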

Speaker: Quang Hai Nguyen | Arrow

Quang Hai Nguyen works as a Technology Field Application Engineer at Arrow Central Europe GmbH. He joined Arrow in 2015 as a Junior Engineer in the graduate program, after which he worked as an Application Engineer focusing on microcontrollers. Since 2019, he has been working as a Technology Field Application Engineer for high-end processors and security in embedded systems. At the beginning of this year, he took on additional responsibility for supporting customers with AI-at-the-edge technology.

10:10: Coffee break and exhibition visit
10:40: How 5G/TSN/Edge will Shape the Future of Industrial Networking
Industrial communication systems are evolving towards standardized Time-Sensitive Networking (TSN). However, the flexibilization of manufacturing processes often requires independence from wired connectivity. The edge computing paradigm and a TSN-over-5G technology stack are interesting approaches to address this challenge.

In industrial contexts, physics, domain requirements and regulatory frameworks pose some communication challenges in terms of dependability, response times and data protection. As a result, a number of approaches were developed using Operational Technology (OT). Within the last 70 years we have seen the evolution from current-loops, over field buses and industrial ethernet systems towards Time-Sensitive Networking (TSN).
However, flexibilization within manufacturing processes, such as the use of Automated Guided Vehicles (AGVs), often precludes cable-based communication. Applying the new distributed cloud computing paradigm (edge computing) makes it possible to reduce communication demand and to increase autonomy. Yet reliable, low-latency, deterministic wireless communication is needed, and its standardization is still in its infancy. Upcoming 3GPP specifications aim to fill this gap (TSN over 5G). To experimentally validate and demonstrate the features of these concepts, we are implementing a first research prototype of a TSN-enabled standalone 5G core for industrial campus networks. Together with the adoption of the edge computing paradigm and a management layer for such an infrastructure, we are thereby moving towards fully configurable, flexible, software-based communication infrastructures.
The aim of this talk is to raise curiosity about the application possibilities of these technologies and to facilitate further discussions.
Speaker: Dr.-Ing. Alexander Willner | Fraunhofer FOKUS

Dr.-Ing. Alexander Willner is head of the Industrial Internet of Things (IIoT) Center at the Fraunhofer Institute for Open Communication Systems (FOKUS) and head of the IIoT research group within the chair of Next Generation Networks (AV) at the Technical University Berlin (TUB). In collaboration with the Berlin Center of Digital Transformation (BCDT), he works with his groups in applying standard-based Internet of Things (IoT) technologies to industrial domains, such as Industry 4.0. With a focus on moving towards the realization of software-based industrial communication infrastructures, the most important research areas include industrial real-time networks (e.g. TSN/5G), middleware systems (e.g. OPC UA), distributed AI (e.g. via Digital Twins) and distributed Cloud Computing (e.g. Edge Computing) including management and orchestration.

11:20: Ethernet to the Edge in Industrial Systems
Great insights and information reside within edge sensors and actuators on the factory floor or in process control environments. Connectivity is the key to accessing and acting on these insights. This talk explores the potential connectivity solutions that will bring the intelligent edge to reality.

The benefits of accessing intelligence at the edge of industrial systems are clear: increased insight, better analytics capabilities, and informed decisions, all leading to increased productivity. The network of digitally connected systems, machines, edge sensors, and actuators sharing information is central to the connected-factory vision. To realize this ambition of highly connected, intelligent, and flexible manufacturing, new industrial connectivity solutions are required that enable connectivity with edge devices. There is a requirement for higher data bandwidth, longer-reach cabling, IP addressability, and increased power at edge devices. Coupled with this challenge is the strain exerted on existing industrial networks by increased traffic flows from the myriad of potentially connected edge devices. Today's networks have limited potential for expansion, and new technologies and techniques are required to meet the demands of our automation environments.

The need for seamless connectivity from every sensor or actuator, even those in remote locations, dictates a change in the industrial network and its associated control systems. This presentation discusses the transition from existing field bus technologies to the new 10BASE-T1L Ethernet technology (IEEE standard 802.3cg-2019 / 10BASE-T1L) within the process automation environment. It will outline Ethernet's use over single twisted pair cabling of up to 1km in length while also adhering to the intrinsically safe, Zone 0 requirement of specific applications. This talk will also explore the opportunities presented by enterprise-wide connectivity, while outlining the challenges on the horizon in connecting and leveraging edge intelligence. Connectivity will unleash the real power of edge intelligence, and this presentation will explore how we make this a reality.
Speaker: Fiona Treacy | Analog Devices

Fiona Treacy is a strategic marketing manager for Process Control and Automation focused on Industrial Connectivity at Analog Devices. Before this role, Fiona led the Marketing effort for MeasureWare and other Precision Instrumentation initiatives. She also held positions in Application Engineering and Test Development. Fiona holds a BSc. in Applied Physics and an MBA from the University of Limerick.


08:50

Track 2 (morning)
08:50: Concepts for Solving the IoT Puzzle

Throughout this talk, we will discuss the what and the how that make IoT solutions complex; it’s not as simple as taking a puzzle you have already completed and adding a few additional pieces to expand the picture. The ability to capitalize on the already-established edge pieces, and then customize and integrate, will make the difference.
Through IoT solutions, machines are getting smarter, gaining context about where they are and what is around them so they can react. Essentially, we are making machines more human - connecting them to their human counterparts so that together they can do more than ever before. When those intelligent machines are integrated into the enterprise, IoT allows you to remove tasks, decrease complexity and confusion for the end user, and drive accurate, data-driven decision making.

The collection and analysis of data is imperative to uncovering value with IoT - this may be the focus of your product or only an add-on that enhances the core functionality that your users expect. Either way, you will need to thoroughly think through each component that gives your device the “Internet of Things” label: data collection, hardware connectivity, communication protocol, and device-level security measures.

Speaker: Ralf Pühler | Kuda

Ralf Pühler is President/CEO Europe of KUDA llc, and a business development and operations professional. Understanding customer needs and priorities leads to a continuous customer dialogue, from the initial idea to market breakthrough, including quantifying the value and impact of ideas by testing them on the market. Ralf has been connecting things to the Internet for years and has helped customers transform traditional processes into connected ones, to optimize cost, eclipse the competition, and create new revenue. He believes in success through providing industry-relevant concepts for value-adding IoT solutions.

09:30: AutoML – A Game Changer for Scaling ML in Production

Machine learning (ML) and artificial intelligence (AI) form the technical foundation for optimizing production and implementing data-driven services. However, applying AI and ML must be made simpler if their potential is to be exploited at scale.

Developing ML solutions is largely a manual, creative, and complex process. In model development alone, there are up to 10^40 possible ways to combine ML methods, features, and hyperparameters. As a result, the model-building process can only be carried out by a small group of experts: data scientists. There are first software tools that automate parts of the model-building process and take a considerable share of the work off data scientists' hands, but using them still requires deep ML expertise. In addition, the data scientist has to talk to the application expert in order to interpret the relationships found in the data from an engineering perspective, with respect to machine behavior or the production process. Domain knowledge therefore has a substantial influence on the quality of an ML solution.

If domain knowledge is decisive, and the data scientist has to discuss the relationships found in the data with the application expert anyway, it is an obvious step to let machine and process experts create ML solutions themselves from the outset, without being data scientists and without expert knowledge of AI or ML. This talk presents a concept that enables application experts to create ML solutions independently, based on their domain knowledge. It is rounded off by an application example from machine and plant engineering, demonstrating how domain experts can create first ML solutions in less than an hour.
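The core of such an AutoML tool is a search over the combined space of methods, features, and hyperparameters; tools differ mainly in how cleverly they explore it. A toy sketch using random search (the search space and scoring function are invented stand-ins for cross-validated model quality on real data):

```python
import random

# Invented toy search space: method x hyperparameter x feature encoding.
SEARCH_SPACE = {
    "method": ["decision_tree", "knn", "linear"],
    "depth": range(1, 11),
    "features": ["raw", "normalized", "fourier"],
}

def toy_score(config):
    """Stand-in for cross-validated model quality (invented numbers)."""
    base = {"decision_tree": 0.80, "knn": 0.75, "linear": 0.70}[config["method"]]
    bonus = 0.02 * config["depth"] if config["method"] == "decision_tree" else 0.0
    penalty = 0.05 if config["features"] == "raw" else 0.0
    return base + bonus - penalty

def random_search(n_trials=200, seed=0):
    """Sample configurations at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        config = {k: rng.choice(list(v)) for k, v in SEARCH_SPACE.items()}
        score = toy_score(config)
        if best is None or score > best[0]:
            best = (score, config)
    return best

score, config = random_search()
print(round(score, 2), config)
```

An AutoML product wraps exactly this loop (with smarter search strategies such as Bayesian optimization) behind an interface simple enough for a domain expert, which is what makes solutions feasible without a data scientist in the loop.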

Speaker: Tobias Gaukstern | Weidmüller

Tobias Gaukstern heads the Industrial Analytics business unit at Weidmüller. In this role, he is building a scaling software business for the Weidmüller Group and leading Weidmüller from a leading supplier of electrical connectivity toward becoming a machine learning champion. His vision: to simplify and accelerate the application of AI and ML so that these technologies can unlock significant value-creation potential across industry. Before his current role, he worked as an industry development manager and as a strategic product manager.

10:10: Coffee break and exhibition visit
10:40: Digital Twins - Model and Optimize Reality with Graphs
Today's IoT solutions are device-centric and limited in their ability to leverage context. In this session, I will show how digital twins of reality can be built with the open-sourced Digital Twin Definition Language and contextualized, and how they fundamentally simplify IoT architectures and applications through a live execution environment.
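To give a flavor of what such a twin model looks like: DTDL describes twins as JSON-LD interfaces, and relationship elements between interfaces are what turn isolated devices into a context graph. A minimal sketch, built as a Python dict for illustration; all `dtmi:com:example:...` identifiers are invented:

```python
import json

# A minimal DTDL v2 interface for a hypothetical thermostat.
thermostat = {
    "@context": "dtmi:dtdl:context;2",
    "@id": "dtmi:com:example:Thermostat;1",
    "@type": "Interface",
    "displayName": "Thermostat",
    "contents": [
        # Telemetry: a stream of measurements emitted by the device.
        {"@type": "Telemetry", "name": "temperature", "schema": "double"},
        # Property: desired state that can be read and written.
        {"@type": "Property", "name": "targetTemperature",
         "schema": "double", "writable": True},
        # Relationship: the edge that turns isolated twins into a graph.
        {"@type": "Relationship", "name": "locatedIn",
         "target": "dtmi:com:example:Room;1"},
    ],
}
print(json.dumps(thermostat, indent=2))
```

The `Relationship` element supplies the context that device-centric solutions lack: once twins are linked, a graph query can ask, for example, for all thermostats located in a given room.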
Referent: Oliver Niedung | Microsoft

Oliver Niedung grew up in Hannover, Germany, and held various object-oriented software development roles before and after finishing his degree in Medical Informatics at the University of Hildesheim. He had development and sales roles at Berner & Mattner and Visio, which was acquired by Microsoft in 1999. At Microsoft, Oliver managed the Embedded Server activities in EMEA with leading global OEMs until 2015. Since then, Oliver has worked with the most strategic Microsoft OEMs and partners in Europe on digital transformation and highly innovative IoT solutions.

11:20 Uhr: The Edge and Smart Motors: Decentralized Automation Concepts Without a PLC
Classical automation concepts centered on a PLC often scale poorly and are highly complex to program. If you instead think in decentralized, reusable modules that carry the necessary logic directly in the devices, exchange process data with each other via the edge, and are managed via the cloud, you get a scalable solution that saves cost and installation space and no longer requires a central PLC.

Decentralized architectures follow the principle that decisions are made exactly where the resulting actions are executed. Only higher-level logic such as process data acquisition, workpiece tracking, or visualization is handled by higher-level systems. Concretely, this means that all time-critical applications that must run in real time are executed directly on the field devices. All remaining applications can be handled by non-real-time management systems. This approach alone offers considerable cost and installation-space savings through fewer active components and simpler, smaller logic blocks.

If the decentralized approach is extended by the edge, which takes over the non-time-critical processes as well as the interface to other systems and to the cloud, the result is a future-oriented, open, and scalable system. Within it, the individual modules and components can exchange data via the edge using the existing Industrial Ethernet fieldbuses. At the same time, the devices can expose process and diagnostic data via protocols such as OPC UA. The edge acts as a data hub that makes this data available, in raw or aggregated form, to various management systems or to the cloud for further analysis.

The advantage is that the first data pre-processing takes place close to the point of origin, and downstream systems receive only the data they actually need. From the edge, data can then be transferred to the cloud, for example via MQTT, e.g. into support platforms of the device manufacturers or machine builders, which enable location-independent remote support and device management as well as further diagnostics such as the calculation of failure probabilities. On this basis, the operator can plan maintenance and spare-parts stocking better. It is important that the edge is an open system under the operator's control. The operator thus retains control over which data is passed on and to whom, and different manufacturers can build on the edge with their applications and services without the operator having to provide separate infrastructure for each manufacturer.
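The pre-processing idea, aggregating near the source and forwarding only what downstream systems need, can be sketched in plain Python; the sample values, threshold, and payload shape are illustrative assumptions, and a real setup would publish the resulting payload via an MQTT client rather than print it.

```python
import json
import statistics

# Simulated raw samples from a field device (e.g. motor current in amps).
raw_samples = [1.9, 2.1, 2.0, 2.2, 9.5, 2.0, 2.1, 2.0]

ALARM_THRESHOLD = 5.0  # illustrative limit, not a vendor value

def aggregate(samples):
    """Reduce a raw window to the values downstream systems actually need."""
    return {
        "mean": round(statistics.mean(samples), 2),
        "max": max(samples),
        "alarms": sum(1 for s in samples if s > ALARM_THRESHOLD),
    }

# Only this small summary leaves the edge, instead of every raw sample.
payload = json.dumps(aggregate(raw_samples))
print(payload)
```

The same window of eight raw readings shrinks to three fields, which is what makes cloud-side failure-probability calculations affordable at scale.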

A practical example is intelligent, programmable servo motors combined into decentralized modules. The modules communicate via the fieldbus with software containers on an edge gateway. Through this, they connect to an IIoT platform that enables management and monitoring of the modules, and they also provide data to the operator's other systems, such as ERP or MES.

Referent: Markus Weishaar | Dunkermotoren

Markus Weishaar completed a degree in electrical engineering (B. Eng.) at Ulm University of Applied Sciences in 2012 and complemented it in 2017 with a part-time degree in industrial engineering and management (M. Sc.). After a total of eleven years in packaging machinery, in various software and management roles, he has been responsible for IIoT and software as a product manager at Dunkermotoren since May 2019.

12:05 Uhr
Joint ASE & i-edge Keynote: How a Cloud/Edge Paradigm is Disrupting the Automation Industry and Why Software is a Key Success Driver
In the past decade, software-driven innovations such as AI and machine learning have revolutionized the information technology used in other areas of business and society. The result is an automation gap that creates barriers to unleashing the step-change productivity improvements that manufacturers aspire to achieve. Cloud/edge computing, as a bridging technology, enables true, enterprise-ready, software-defined shop-floor solutions.

In this talk we discuss the six key ingredients of shop-floor automation: how can manufacturers and automation providers use edge computing and cloud paradigms to enhance productivity on the shop floor?
Referent: Johannes Boyne | Boston Consulting Group

Johannes joined the BCG group four years ago. Since July 1, 2020, he has been an Associate Director located in the Munich office. With BCG and its subsidiary BCG Digital Ventures, he has worked on multiple end-to-end, software-driven, cloud/IoT-related business builds and product definitions. Before BCG, Johannes held multiple senior software management and engineering positions. Johannes designed one of the first edge-powered industrial robot setups: BCG and AWS cooperated on a demo setup for a large manufacturer and demonstrated the results at HMI in 2017, one of the first AWS Greengrass installations.

12:45 Uhr
Mittagspause und Besuch der Ausstellung


13:45 Uhr

Track 1 Nachmittags
13:45 Uhr: Research project AIfES: Embedded AI, Hierarchical Models and Grey-box Approaches
AIfES is a machine learning framework where the algorithms are optimized for resource-limited hardware such as microcontrollers. The integration of problem-based prior knowledge and hierarchical structures allows small and efficient implementations.

Probably the most frequently used method in machine learning (ML) is the artificial neural network (ANN). The use of deep neural networks (DNNs) has led to groundbreaking successes in the recent past. However, using ANNs on resource-limited hardware such as microcontrollers (μC) faces hurdles and limitations. Current ML software frameworks are optimized for the PC and use Python, which allows easy implementation and fast training of DNNs. Large DNNs, however, can only be implemented on microcontrollers to a limited extent, and there is no standardized method for porting them yet. Some solutions are already available: STMicroelectronics offers the STM32Cube.AI® ecosystem for its own μCs, into which pre-trained ANNs can be imported, and with TensorFlow® Lite for μCs, Google® also offers a way to port pre-trained neural networks. Current solutions focus on porting a pre-trained ANN and usually require a 32-bit platform.

With its own research project AIfES (Artificial Intelligence for Embedded Systems), Fraunhofer IMS investigates the application of artificial intelligence on resource-limited systems. AIfES is a machine learning framework that can run on almost any hardware platform, from an 8-bit μC to a PC. The framework was developed in the programming language C for maximum compatibility. It is standalone but also compatible with other ML tools such as TensorFlow®: by importing the structure and weights, an ANN can easily be replicated. Development started with a freely configurable feedforward neural network (FNN), for which everything from activation functions to memory management was optimized for use on μCs. These measures even allow training a neural network on an embedded system. To use ANNs on μCs or DSPs without floating-point arithmetic, fixed-point arithmetic was successfully implemented and investigated.
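The fixed-point idea mentioned above can be illustrated in a few lines of Python; Q15 scaling is chosen here as a common example format, while AIfES' actual kernels are written in C and differ in detail.

```python
# Q15 fixed-point: floats in [-1, 1) become 16-bit integers scaled by 2**15.
SCALE = 1 << 15

def to_q15(x: float) -> int:
    """Quantize a float weight or activation to a Q15 integer."""
    return int(round(x * SCALE))

def q15_mul(a: int, b: int) -> int:
    # Integer multiply with rescale, as a µC without an FPU would compute it.
    return (a * b) >> 15

w, x = 0.5, 0.25
wq, xq = to_q15(w), to_q15(x)
y = q15_mul(wq, xq) / SCALE  # back to float only to check the result
print(y)  # close to w * x = 0.125
```

A whole neuron is then just a loop of such integer multiply-accumulates followed by one rescale, which is why fixed-point networks fit on DSPs and µCs without floating-point hardware.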

Implementing AI on μCs requires more than optimized algorithms; optimization already starts with feature extraction. For example, hierarchical models were developed to reduce network size: where possible, several small ANNs replace one large DNN. The IMS developed, for example, a complex gesture recognition that can be used e.g. for menu navigation in wearables. Different gestures may be required in different menu areas, so several small ANNs can be trained, each handling a small, selected set of gestures. In this way they replace one large DNN that would have to recognize all gestures.
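The hierarchical idea, one small model per menu context instead of one big model for every gesture, can be sketched as follows; the contexts, gesture labels, and the stand-in "models" (simple threshold functions in place of trained ANNs) are invented for illustration.

```python
# Each menu context gets its own small recognizer that knows few gestures.
# Stand-in "models": functions mapping a feature value to a gesture label.
def main_menu_model(x):
    return "swipe_left" if x < 0 else "swipe_right"

def settings_model(x):
    return "tap" if abs(x) < 0.5 else "circle"

models = {"main_menu": main_menu_model, "settings": settings_model}

def recognize(context, feature):
    # Dispatch to the small model for the current context
    # instead of running one large DNN over all gestures.
    return models[context](feature)

print(recognize("main_menu", -0.7))  # swipe_left
print(recognize("settings", 0.1))    # tap
```

Because only the active context's model is loaded and evaluated, both memory footprint and inference time shrink, which is exactly what a µC needs.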

Another topic is grey-box approaches, in which prior knowledge from the application is included in the feature extraction. DNNs and big-data approaches follow a black-box strategy in which a large amount of information is processed and fed into a large DNN. Grey-box approaches instead aim to present only the necessary features to the ANN and to keep the number of inputs as low as possible.
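A minimal example of the grey-box idea: instead of feeding raw samples to a network, present a few engineered features chosen with application knowledge. The features below (mean, RMS, peak) are generic signal-processing examples, not the specific features used at Fraunhofer IMS.

```python
import math

def grey_box_features(signal):
    """Condense a raw window into a handful of physically meaningful inputs."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    peak = max(abs(s) for s in signal)
    return [mean, rms, peak]

raw = [0.0, 1.0, 0.0, -1.0] * 25  # a window of 100 raw samples
feats = grey_box_features(raw)
print(len(raw), "->", len(feats))  # 100 inputs reduced to 3
```

A network with 3 inputs instead of 100 needs far fewer weights, which is what makes training and inference feasible on a µC in the first place.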

Referent: Dr. Pierre Gembaczka | Fraunhofer IMS

Dr. Pierre Gembaczka has been a research scientist since 2014. He studied microtechnology and medical technology and holds a Master's degree from the University of Applied Sciences in Gelsenkirchen. He subsequently completed his doctorate at Fraunhofer IMS in cooperation with the University of Duisburg-Essen, obtaining a doctoral degree in engineering. From 2014 to 2017 he worked as a research assistant in the Micro- and Nanosystems - Pressure Sensors department at Fraunhofer IMS. Since 2018 he has worked in the embedded systems group at Fraunhofer IMS, researching embedded AI solutions for various applications. He is the primary developer of the AI software framework AIfES (Artificial Intelligence for Embedded Systems).

14:25 Uhr: Designing an AI Enabled Camera Device for the Edge

Referent: Dieter Kiermaier | Arrow

Dieter Kiermaier works at Arrow as a Technical Solution Architect. He started his career as a developer of embedded Linux systems. For six years he has been active in electronic component distribution, working as a Technology Field Application Engineer for high-end processors and systems-on-module. At the beginning of this year, he moved into the role of Technical Solution Architect for embedded systems and cloud-based solutions at eInfochips (an Arrow company).

15:05 Uhr: Kaffeepause und Besuch der Ausstellung
15:35 Uhr: SMART NEUR CHIP – Deep-Learning computing on the edge
AVI has implemented a deep-learning hardware and software toolbox for deep-learning-on-the-edge applications, based on deep-learning accelerator software tools and a unique chip architecture that processes data at high speed with very low power consumption.

Deep learning using convolutional neural networks (CNNs) brings a huge improvement in accuracy and reliability for various tasks in automation and incident detection. Many concepts, such as single-shot detectors, have been published for detecting objects in images or video streams. However, CNNs have disadvantages when deployed on embedded platforms such as re-configurable hardware like field-programmable gate arrays (FPGAs). Due to the high computational intensity, memory requirements, and arithmetic constraints, a variety of strategies for running CNNs on FPGAs or ASICs have been developed.

In addition, functional safety and self-awareness are becoming more and more important, because the user wants to be able to trust the information from the sensor system. This so-called safety of the intended functionality (SOTIF) is one of AVI's core issues and the second main requirement to be addressed by the FPGA IP core developed by AVI. Safe sensor data, in the sense of a guaranteed latency threshold, is one of AVI's specialties, originating from the camera-monitor systems that are among the major products in the RAILEYE product line. This know-how was applied to the machine-learning methods and algorithms in AVI's on-the-edge solutions.
Aware of this requirement, we decided to develop our own software tools and IP core architecture. As a proof of concept for this new technology, AVI realized a concrete application: the truck turn assistant "CAREYE SAFETY ANGLE," which is already on the market and has the required type approvals from the Kraftfahrzeug-Bundesamt (KBA). This product includes a deep-learning-based object detector running on a low-power, fanless controller box.

The presented methods show our best-practice approaches, for example a TinyYOLOv3 detector network on a Xilinx Artix-7 FPGA using techniques such as batch-normalization fusion, filter pruning, and post-training network quantization. Results will be demonstrated that compare the precision of the CNN after the optimization steps with YOLOv3 and other networks, as well as the performance in terms of frames per watt and frames per second on the Xilinx Artix-7. An outlook will show future applications that can be addressed with this technology.
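Batch-normalization fusion, one of the optimizations named above, folds the BN parameters into the preceding layer's weights so that no separate normalization step runs at inference time. A single-channel scalar sketch in plain Python (the parameter values are arbitrary examples):

```python
import math

# Per-channel parameters of a conv layer followed by batch normalization.
w, b = 0.8, 0.1                    # conv weight and bias (illustrative)
gamma, beta = 1.2, -0.3            # learned BN scale and shift
mean, var, eps = 0.05, 0.2, 1e-5   # BN running statistics

def conv_then_bn(x):
    """Reference: convolution output passed through batch normalization."""
    y = w * x + b
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

# Fold BN into the conv parameters: one multiply-add replaces both steps.
s = gamma / math.sqrt(var + eps)
w_fused = w * s
b_fused = (b - mean) * s + beta

def fused(x):
    return w_fused * x + b_fused

print(abs(conv_then_bn(1.7) - fused(1.7)))  # ~0.0, the two paths agree
```

On an FPGA this removes an entire per-pixel normalization stage, which is why fusion is usually the first step before pruning and quantization.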
Referent: Johannes Traxler | AVI-Systems

Johannes Traxler attended the HTBLuVA for telecommunications and computer science in St. Pölten and then studied technical physics and astronomy at the TU Wien. During his studies he concentrated on artificial intelligence research and development with machine learning, applied to the early recognition of tumor cells. Between 1999 and 2002 he worked as director of research and development at ArtiBrain Forschungs- und Entwicklungs GmbH, implementing machine-learning-based tunnel safety and incident detection systems for the first time. In 2013 he founded AVI-Systems GmbH, where he started with deep-learning-based applications for manufacturing inline inspection systems in 2014. Since then he has acted as CEO, invented new approaches for real-time video transmission with his team, and collaborated with various research organizations to develop deep-learning applications. Mr. Traxler heads the task force "Videodetektion in VBA," part of the German Road and Transportation Research Association, and makes significant contributions to the development of technical standards and guidelines for Germany.

16:15 Uhr: 5G/AI/Edge Computing - Data Management Challenges, Technologies and Architectures

The worldwide shift towards digitization has placed immense pressure on connectivity. Over 20 billion Internet of Things (IoT) connected devices have already been installed, and the number is estimated to grow to 75.44 billion by 2025. IoT devices benefit all industry sectors, promoting higher levels of convenience, productivity, and communication. To keep up with the vast amounts of data we need to utilize the processing power on the IoT devices and make the IoT edge intelligent.

Edge computing solves two problems by merging them into one solution. On the one hand, the strain on cloud data centers from ever-increasing amounts of data is about to reach a breaking point. On the other hand, Artificial Intelligence (AI) systems consume information at such a speed that there never seems to be enough. Edge databases enable applications to bring Machine Learning (ML) models to the edge.

Powered by real-time databases and AI, the intelligent edge can provide real-time insights that improve many industries. Last-mile delivery is made faster and more efficient with features such as smart tracking and real-time route navigation. Intelligent systems can provide stylistic insights for customers as they browse a shop, and facial recognition software can be used for fraud detection. Edge intelligence can provide predictive risk assessments for healthcare professionals and promote better health awareness. There is no limit to the benefits the intelligent edge can provide - not even the cloud.

Referent: David Nguyen | Raima

David Nguyen is the Head of Engineering & QA at Raima. He started his career as a QA software engineer, working on the creation and maintenance of a fully automated QA testing framework and a complete daily build system. As Director of Engineering, he leads the modernization of the database product line with a feature focus on AI, autonomous driving, and edge IoT, while ensuring the customer experience is as straightforward and easy to use as possible. David holds a B.S. in both Mathematics and Computer Science from the University of Washington.


13:45 Uhr

Track 2 Nachmittags

13:45 Uhr: Condition Monitoring Solutions for Plants, Considering Suitable Data Transmission Technologies
The talk covers the following aspects:

  • Internet access technologies and wireless technologies for condition monitoring solutions: when are WLAN, LoRa, Sigfox, or cellular the best fit, considered per application
  • Process monitoring and automation with sensor solutions
  • Challenges of wireless communication
  • Application example: motor monitoring at Murrelektronik
Referent: Thomas Schildknecht | Schildknecht AG

Thomas Schildknecht is the founder and CEO of Schildknecht AG, a specialist in industrial data radio and the industrial Internet of Things. Among other things, the company developed a pioneering radio system for industrial use that makes secure and stable wireless transmission possible. The company is also a system provider in the areas of remote monitoring and maintenance, telemetry, and M2M.

14:25 Uhr: Reproducible Data Science with a Narrative Focus
Building and optimizing a data analysis pipeline is a core activity for a company with a data-centric business model. Other companies - especially small and medium-sized enterprises - struggle to set up a similar data analysis workflow, among other reasons because many of the tools employed by data-centric companies are expensive to buy or require constant maintenance by qualified personnel.

However, without a fixed and reproducible data analysis workflow, even medium-sized data sets can easily be overwhelming, and one can get lost in the details, missing obvious patterns.

In his talk, Nikolai Hlubek will present an example workflow that starts from scratch with the planning of laboratory experiments and ends with a trained machine-learning model on a microcontroller. The workflow uses freely available tools: the explorative data analysis is done in Python and Jupyter notebooks, the model building is done with Keras, and the optimization of a Keras model for the microcontroller is done with X-CUBE-AI from ST.
Referent: Dr. rer. nat. Nikolai Hlubek | Bürkert Fluid Control Systems

Nikolai Hlubek is a Research and Development Engineer at Bürkert Fluid Control Systems. He holds a Ph.D. in physics and has been doing data science for more than ten years. Nikolai is the winner of the 2017 What The Data!? hackathon.

15:05 Uhr: Kaffeepause und Besuch der Ausstellung
15:35 Uhr: Functional Safety with AI
To integrate AI as a critical part of solutions with functional safety requirements, several scopes of work are currently in progress. One scope concerns the development process and AI, given the importance of the process in developing functionally safe systems. It also seems to be the simpler part, because in this field there is a lot of experience in designing proper processes for developing functionally safe systems.

It is more challenging to define techniques and measures to follow when AI is to be designed as a critical part of functional safety. Fundamental research is needed to give the required confidence in such solutions integrating AI in functionally safe systems.

The presentation covers ongoing activities in the areas of process design and techniques and measures. It is intended to start a discussion on whether these activities will lead to more confidence in solutions with AI as a critical part of functional safety.
Referent: Frank Poignée | infoteam SET

Frank Poignée is Chief Engineer and Safety Consultant at infoteam SET. He is a project manager for industrial automation, medical engineering, and life science, and the Product Owner of the infoteam Functional Safety Management Process iFSM. Frank is an ASQF/ISTQB Certified Tester and a TÜV Functional Safety Engineer for hardware and software.

16:15 Uhr: Edge Computing Meets Smart Services
Referent: Olaf Wilmsmeier | peraMIC
Referent: Dr. Robert Rae | PerFact Innovation
17:00 Uhr
Ende der Veranstaltung
*Programmänderungen vorbehalten

Business Sponsor