Conference Agenda*

The program of the i-edge conference and the ASE Congress at a glance: Download PDF

Tuesday, November 17, 2020

8:00 AM
Registration
8:45 AM
Welcome
9:00 AM
Edge Security
In 2017, Microsoft introduced a new standard for IoT security by releasing the white paper “The seven properties of highly secured devices.” The paper argued, based on an analysis of best-in-class devices, that seven properties must be present on every standalone device that connects to the internet.

Some of these properties, like the presence of a hardware-based root of trust or compartmentalization, require certain silicon features. Others, like defense in depth, require a certain software architecture as well as the presence of other properties, like a hardware-based root of trust. Finally, properties such as renewable security, certificate-based authentication, and failure reporting require not only silicon features and certain software architecture choices within the operating system, but also deep integration with cloud services. Assembling these critical pieces of infrastructure is difficult and error-prone.

Ensuring that a device meets these properties could therefore increase its cost. This led us to believe that the seven properties also introduced an opportunity for security-minded companies to implement them as a platform, freeing device manufacturers to focus on product features rather than security. Azure Sphere is Microsoft’s entry into the market with a seven-properties-compliant, end-to-end product offering.


Speaker: Jürgen Schwertl | Microsoft

Jürgen joined Microsoft in 1990. In Windows Program Management he helped shape Windows releases over almost a quarter-century. Moving into an Architect role in Microsoft Services, he began connecting "things" to the cloud, planning and delivering industrial IoT Solutions. In the One Microsoft IoT & MR team he is now enabling partners to build innovative IoT solutions secured from the chip to the cloud.

9:45 AM
ML-based Sensors Change the Edge
The use of machine learning algorithms directly in the sensor or in the direct vicinity of a sensor system offers many exciting new possibilities, especially in the area of real-time data analysis for pattern recognition. On the other hand, this approach also creates new challenges whose overall complexity should not be underestimated.

With "limited hardware resources" and "remote update capability of the ML models", the session addresses two of these topics. Using a virtual condition monitoring sensor as an example, the talk shows the audience how to successfully solve the challenges.
Speaker: Klaus-Dieter Walter | SSV

Klaus-Dieter Walter is CEO of SSV Software Systems GmbH in Hanover and is well known through lectures at international events as well as articles in technical journals. He has published four professional books on the topic of embedded systems. In 2007, Mr. Walter co-founded the M2M Alliance e.V. and served on its board of directors for many years. He is also a board member of the industry forum VHPready, which works to create a communication standard for virtual power plants. Mr. Walter has been a member of the Internet of Things expert group within the Intelligent Network focus group of the Digital Summit of the German Federal Government since 2012.

10:30 AM
Performance Analysis and Bottlenecks of AI on the Edge
Artificial intelligence has been successfully deployed to numerous types of platforms. The performance and architecture of devices used in the field are often different from the platforms in the data centers. There are also vast differences in system requirements that drive the overall system design.

What can we do to get the necessary performance while keeping the accuracy and not depleting the battery in a matter of seconds? We will discuss a few selected architectures with models and several techniques for optimizing algorithms, mostly focused on deep learning.

Speaker: Lukasz Grzymkowski (en) | Arrow

Lukasz Grzymkowski is working with Arrow as Technical Lead for Embedded Software. He started his career as a software engineer for Intel in the Data Center Group and joined Arrow in early 2018. In parallel, he is currently working on his Ph.D. degree focused on artificial intelligence and control theory.

11:15 AM
Guiding AI to the Application Edge

As AI methods have matured in datacenters and on industrial PCs, we see people eager to apply them on embedded devices close to the application. The step from datacenters into constrained devices brings both challenges and opportunities.

In this talk we will consider the different mindsets needed for both environments and highlight challenges, tools and solutions to ease the transition.

Speaker: Dr. Nicolas Lehment | NXP

Nicolas Lehment is a systems architect at NXP’s Industrial Competency Center, where he advises on strategic topics such as ML/AI, connectivity and safety for industrial automation. Before joining NXP, he designed cutting-edge computer vision and robotics systems for ABB and Smartray. He has collaborated on research papers on topics ranging from ML-driven video classification and human pose tracking to collaborative robotics. This academic work earned him a doctoral degree at the Technical University of Munich.

12:00 PM
Joint Keynote ASE & i-edge: Machine learning on-the-edge: Anything but state-of-the-art
Speaker: Prof. Dr. Oliver Niggemann | Universität der Bundeswehr Hamburg

Oliver Niggemann studied computer science at the University of Paderborn, where he received his doctorate in 2001 with the topic "Visual Data Mining of Graph-Based Data." He then worked as a software developer for Acterna in the telecommunications industry. Until 2008 he was a lead product manager at dSPACE. Niggemann was active in the AUTOSAR committee and until 2008 was chairman of the advisory board of the s-lab of the University of Paderborn. In 2008, Niggemann accepted the call to the newly established professorship for Computer Engineering at the Ostwestfalen-Lippe University of Applied Sciences. He headed the laboratory for "Artificial Intelligence in Automation." From 2008 to 2019, he was a member of the board of the Institute for Industrial Information Technologies (inIT). Until 2019, Niggemann was also deputy director of the Fraunhofer IOSB-INA Institute for Industrial Automation. On April 1, 2019, Niggemann took over the university professorship "Information Technology in Mechanical Engineering" at the Helmut Schmidt University of the Bundeswehr in Hamburg. There, at the Institute of Automation Technology, he conducts research on artificial intelligence and machine learning for cyber-physical systems.

12:45 PM
Lunch break
1:45 PM
Enable your Project with AI – An Approach to Support System Design and Architecting with AI-specific Properties
Many companies struggle with the decision of how to engage artificial intelligence for their products. During the talk, Robin Roitsch will introduce a systematic approach for checking specific AI properties and their current status within a project. The method provides an overview of particular properties that need to be addressed during the design and development of AI applications. It also shows how to address these topics if they have not yet been considered. Furthermore, it assists in evaluating these solutions against stakeholders' requirements.

Research and development in the area of artificial intelligence started in the 1950s. However, the actual integration and deployment of AI at small and medium-sized embedded systems companies is still a widely unestablished process. The main reason for this is that a lack of understanding of AI technology and its specific properties leads to a lack of knowledge about potential application areas and to uncertainty about whether migrating to an AI-based solution will pay off.

The main problem in applying AI in industry is the lack of a systematic approach for mapping the concerns of stakeholders to potential AI-enabled solutions, one which, from an architectural point of view, also discusses the consequences in terms of benefits and drawbacks.

Robin's work aims to support decision-makers who want to adopt AI's potential for their projects. He created an approach that enhances standard requirements engineering techniques, such as the adequacy check, with AI-specific property checks. This check allows awareness and confidence to grow about the specific processes and steps that need to be considered throughout an AI-based development. Furthermore, it assists in identifying shortcomings in the current status of your architecture and provides hints on how to address them. The approach does not necessarily require an input but can straightforwardly be used as a support tool for the requirements elicitation process when it comes to AI-specific needs.
Speaker: Robin Roitsch | NVIDIA

Working as a Business Development Manager in the domain of embedded and industrial systems, Robin's daily challenge is to update customers on the latest upcoming technologies and trends - especially in a reasonably new technology like artificial intelligence, which goes beyond traditional HW/SW approaches. As an NVIDIA employee and former Technical Field Application Engineer at Arrow Electronics, Robin engages with many different customers across a wide range of projects and use cases. When it comes to AI, his primary task is to identify the current status of a customer's project and to provide support for fundamental questions, deep-dive technical assistance, and general shortcomings throughout the complete project lifetime.

2:30 PM
AI at the Edge – Enabling Time Critical Video Analytics
Advances in computing technology and AI algorithms have made it possible to perform Edge Video Analysis (EVA) on location in real time. Discover the solutions that deliver the right compute for time-critical use cases in industries ranging from traffic monitoring to medical diagnosis and security surveillance.

Taking AI-enabled video analysis to the edge to support real-time data delivery and decision making requires extremely powerful microprocessor units (MPUs) combined with graphics processing units (GPUs), which can accelerate compute-intensive applications by spreading computing workloads over multiple cores. Hear more about powerful Intel microprocessors that are dramatically boosted by the addition of NVIDIA GPUs, and about deep learning platforms based on NVIDIA’s Jetson family that provide a quick start for autonomous machine development.
Speaker: Marco Krause | Adlink

Marco Krause is Global Account Director at Adlink. He is responsible for global Tier 1 and Tier 2 accounts and heads the CEE team. Marco has more than 15 years of experience in the embedded, IPC, distribution and IT environment. His special interest is in AI-related topics and applications (e.g. robotics, AGVs, autonomous cars).

3:15 PM
Integrating Connectivity, Computing, and Peripheral Functions at the IoT Edge
This talk examines the challenges that embedded system developers face in implementing edge computing functions using existing compute/control components, and proposes a new platform approach to accelerate development and enhance edge computing systems’ performance.

The instinct of embedded system developers when they start developing a new edge computing device is to base it on the type of computing component they are most familiar with – a microcontroller, an FPGA, or an applications processor. In terms of raw processing horsepower, products are available in all of these categories that can handle the computing workload of machine learning or other AI applications. But the implementation of edge computing designs throws up different, and more difficult, problems than conventional MCU- or processor-based architectures face. And the root of these difficulties lies in the need to seamlessly combine multiple functions in a single system.

These functions include:
•    Connectivity – getting products to work seamlessly in a field of multiple wireless technologies
•    Security - compliance with emerging privacy and security requirements
•    Human-machine interface - aesthetically attractive industrial design
•    The user experience - making technology plug-and-play, while delivering behind-the-scenes software updates

At the same time, manufacturers must also consider other factors beyond the device itself, including:
•    Secure, scalable device management with easy on-boarding supporting major platforms or in-house servers
•    Integration - making disparate technologies work together seamlessly
•    Cloud support - secure, scalable device management with easy on-boarding
•    Monetization - enhanced profitability through reduced support costs while providing for secure lifecycle management
•    Low-power operation – minimizing heat dissipation while addressing environmental issues

Individual components on their own fail to provide an ecosystem to support rapid development of a system which includes all necessary functional elements, while also meeting the marketing specifications for the user experience, monetization and so on.

In this session, we will describe how a new platform approach can simplify the development of IoT edge devices, helping embedded product manufacturers to get to market more quickly, while enhancing the performance and strengthening the security of connected devices at the edge of the network. We will also describe the essential components of such a platform, including pre-certified, low-power solutions for connectivity, security, device management and middleware. The speaker will describe these features drawing references to Cypress’ IoT-AdvantEdge system, a roadmap for IoT edge device development that includes secure compute and connect solutions, IoT development kits, improved APIs, tools and support, partner certifications, online IoT community resources, and investments in standards-based security initiatives to help unify the growing market.
Speaker: Rob Conant (en) | Infineon

Rob Conant has a long history in the Internet of Things and connected products. Rob was a pioneer in the Internet of Things as a co-founder of Dust Networks in 2002, where he ran sales and marketing and set the product direction as the company enabled a new market for wireless connected products in industrial markets. After Dust, Rob joined Trilliant to bring the benefits of connected products to the energy industry, where he delivered smart meters and energy management to millions of customers across the US, Europe, and Asia. Rob first ran the engineering team to ensure the company had market-leading products, then took the position of Chief Marketing Officer to drive the company's growth. Rob's deep technical background and business orientation (he ran sales and marketing at multiple companies) give him a unique perspective on the interactions between business models, technology, and consumers. Rob has a PhD in Electrical Engineering and a BS in Mechanical Engineering from UC Berkeley.

4:00 PM
MIOTY – The New LPWAN Standard for Sub-1 GHz Communication
Parameters such as long range and low power consumption are increasingly important for connected devices. To communicate over long distances, the technology is optimized for long-range radio-frequency (RF) communication. The Massive Internet of Things (MIOTY) protocol divides data packets into smaller subpackets: each spends less time on the air, which reduces the risk of collisions and data loss.

While communication speed for wireless technologies has been a priority for decades, the focus has started to shift to other parameters such as long-range and low-power connected devices. Networks with this focus are often referred to as low-power wide-area networks (LPWAN); sensors in LPWAN networks communicate infrequently and can sleep for minutes to hours between transmissions. In such applications, high data throughput is not as important as being able to communicate over long distances, which is why the technology is optimized for long-range radio-frequency (RF) communication.

The Massive Internet of Things (MIOTY) protocol enables the division of data packets into smaller subpackets, causing them to spend less time in the air and therefore decreasing the risk of collisions and of data loss. The MIOTY solution offers a star network for low-power end nodes as well as a gateway solution for cloud connectivity with a complete long-range and low-power solution for worldwide Sub-1 GHz communication. The goal of the MIOTY Alliance, the governing body of MIOTY, is to enable the most accessible, robust and efficient connectivity solution on the market. The MIOTY protocol operates in license-free bands around the world, and there are no costs involved in using the radio spectrum, unlike narrowband IoT solutions.

The physical layer (PHY) and link layer of MIOTY technology are based on a publicly available document: the TS 103 357 public technical standard (TS) from the European Telecommunications Standards Institute (ETSI). Eliminating the risk of vendor lock-in, MIOTY has already been tested with three independent silicon providers, including Texas Instruments (TI) using the CC1310 microcontroller (MCU). As of today, MIOTY offers a private network but the expectation is that third parties will also offer a network solution as a service.

The strongest advantage of MIOTY is the TSMA method, which splits the data into smaller, equally sized subpackets that are distributed over the time and frequency domains. This method creates a technology that is less susceptible to interference while also being friendlier to other radio systems. Because the packets are sent in smaller radio bursts, they spend less time on the air, resulting in lower power consumption and a longer battery life.
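The following toy sketch (Python, illustrative only) conveys the telegram-splitting idea behind TSMA: a payload is cut into equally sized sub-packets and spread over time and frequency slots. The slot pattern here is a made-up stand-in, not the hopping sequence defined in ETSI TS 103 357.

```python
# Illustrative sketch only: split a payload into equally sized sub-packets
# distributed over time and frequency slots. Toy slot pattern, not the
# ETSI TS 103 357 hopping pattern.
import random
from dataclasses import dataclass

@dataclass
class SubPacket:
    index: int
    time_slot: int
    channel: int
    payload: bytes

def split_telegram(payload: bytes, n_sub: int = 24, n_channels: int = 8):
    chunk = -(-len(payload) // n_sub)          # ceiling division -> equal-sized chunks
    padded = payload.ljust(chunk * n_sub, b"\x00")
    rng = random.Random(0xA5)                  # deterministic toy hopping pattern
    return [
        SubPacket(
            index=i,
            time_slot=i * 3 + rng.randrange(3),    # spread bursts over time
            channel=rng.randrange(n_channels),     # and over frequency
            payload=padded[i * chunk:(i + 1) * chunk],
        )
        for i in range(n_sub)
    ]

bursts = split_telegram(b"sensor reading #42: 21.7 C")
print(f"{len(bursts)} bursts of {len(bursts[0].payload)} bytes each")
```

Because each burst is short and the bursts are scattered, a single collision destroys only a fraction of the telegram, which is why the receiver can still reconstruct it with forward error correction.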

MIOTY can help overcome performance degradation in high-node-count networks and help reach remote sensors. MIOTY is suitable for a wide range of battery-operated applications that require a high density of sensors and small data volumes.
Speaker: Elin Wollert (en) | Texas Instruments

Elin Wollert is an applications engineer at Texas Instruments in Norway. TI is a founding member of the MIOTY Alliance, and in Elin's role as project manager for MIOTY at TI, she is pushing the MIOTY technology forward.

4:45 PM
Focus on the Edge
Building a secure and robust embedded Edge device requires a strong software foundation. Microsoft Azure IoT Edge integrated with Mentor Embedded Linux provides that foundation and lets developers “Focus on the Edge”.

There is a lot of enthusiasm for adopting and implementing intelligent Edge-enabled embedded devices, but once the ideas leave the whiteboard and reach the implementation stage, developers face numerous challenges that hinder their progress. Instead of focusing on Edge applications and building great experiences for their customers, developers end up spending valuable time and energy on basic enablement issues such as enabling Linux on their device, board bring-up, and generally getting all of the foundational software enabled and tested before they can actually start developing the Edge applications. Once implemented, these embedded Edge devices end up in diverse environments where they need to operate securely for a long time, which continuously distracts developers into securing and upgrading software on the deployed devices instead of building their next great Edge application.

In this session we describe how embedded software solutions from Mentor help developers “Focus on the Edge” by providing a ready-to-use Edge environment that includes Microsoft Azure IoT Edge integrated with multiple flavors of embedded Linux from Mentor. We discuss the native integration and technical architecture of the joint solution. The integrated solution comes enabled with critical services to securely manage devices remotely e.g. firmware updates and device diagnostics/monitoring. This helps users to get started quickly and get to market faster. The solution is backed by long term support from Mentor so that the users do not have to continually worry about software security issues or component upgrades after the devices are deployed in the field.


Speaker: Muhammad Shafique | Mentor, a Siemens Business

Muhammad Shafique is a Product Manager in the Embedded Platform Solutions division at Mentor, a Siemens business. He has over 17 years of experience in embedded software, spanning from deeply embedded software all the way up to the higher layers of the embedded software stack and device management in the cloud (e.g. Internet of Things, edge stacks, industrial connectivity). In his current role, he is responsible for the IoT/cloud integration strategy for the embedded Linux and real-time operating system products from the Mentor Embedded Platform Solutions division. He holds a bachelor’s degree in electrical engineering.

5:30 PM
End of the first Conference Day

Wednesday, November 18, 2020

8:00 AM
Registration
8:45 AM
Welcome

TRACK 1 IN THE MORNING

9:00 AM

09:00 a.m.: DevOps for Machine Learning at the Edge
If you work with code at all, you’ve probably heard of DevOps. An approach to application lifecycle management, it employs an (ideally, fully automated) continuous integration / continuous deployment (CI/CD) pipeline to streamline the process of building, testing, and deploying new code into a production environment. 

In my session, I am going to cover the DevOps approach for machine learning scenarios. After you have the model you want, you’ll need to address model packaging, which involves capturing the dependencies required for the model to run in its target inferencing environment (at the edge this is typically a device). Containerization is the obvious choice; containers are the de facto unit of execution today across both the cloud and intelligent edge. You’ll also want to consider model formats that are agnostic of training and serving fabric, which is where reusable formats such as Open Neural Network Exchange (ONNX) can be useful.
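As a minimal sketch of the packaging step described above (an illustration, not part of the session material): exporting a small PyTorch model to the framework-agnostic ONNX format, producing the artifact that would then be baked into a container image for the edge device. The model, shapes and file name are placeholders.

```python
# Minimal sketch: package a trained model in the framework-agnostic ONNX format
# so it can later be served from a container at the edge. Placeholders only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

dummy_input = torch.randn(1, 16)   # example input that defines the graph shape
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                  # artifact to copy into the edge container image
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```

A CI/CD pipeline would typically run this export after training, then build and push a container that serves model.onnx with an ONNX-capable runtime.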
Speaker: Veronika Zellner | Microsoft

Veronika Zellner is an Architect for Data & AI at Microsoft and passionate about all data-related topics, especially Machine Learning in the cloud and on the edge. She has several years of experience in Business Intelligence and IT consulting. Currently, Veronika is working at Microsoft in Munich, Germany.

09:45 a.m.: Running AI Applications on Limited-Resource Hardware
AI applications require a lot of computational power and memory. That is why deploying AI at the edge has traditionally required high-end processors, FPGAs or GPUs. Recently, however, low-end devices have received a performance boost that enables them to run AI applications with a limited feature set. This talk gives the audience an overview of the state of development and of the software and hardware aspects that enable low-end devices to run an AI application.

Running AI at the edge allows the user to run AI applications locally on the hardware without any intervention from the cloud. The benefits are fast response times, less bandwidth consumption, and the ability to deploy devices in harsh environments where network connectivity may not be available. AI applications require a lot of computational power and memory, which is available only on high-end devices like MPUs, GPUs, VPUs, and FPGAs. Low-end devices, mostly MCUs, are defined as devices running bare metal (no operating system), with clock frequencies under 1 GHz and limited memory. Consequently, deploying AI applications on those devices seemed out of the question.
Recently, however, low-end devices have received a performance boost; some of them can now reach clock frequencies of 600 MHz. As a result, the idea of bringing AI capability to these low-end devices has started to take root.

During this talk, we will discuss multiple approaches to provide a big picture of what is currently being developed, where the trend is heading, and where the limitations are. We will look at the new Arm Cortex-M55 processor, a new AI-capable microcontroller core, and examine how it helps boost AI performance. Then we will look at the software adaptations for microcontrollers (TensorFlow Lite, CMSIS-NN). Finally, we will look at the implementations coming from the silicon vendors.
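A hedged sketch of the software side mentioned above, assuming a tiny Keras model and random calibration data: full-integer post-training quantization with the TensorFlow Lite converter, the usual route towards TensorFlow Lite for Microcontrollers and CMSIS-NN kernels.

```python
# Sketch (assumption, not the speaker's material): full-integer post-training
# quantization of a small Keras model, producing a flatbuffer suitable for
# TensorFlow Lite for Microcontrollers.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():
    # Calibration samples used to pick the int8 quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
open("model_int8.tflite", "wb").write(tflite_model)  # flashable as a C byte array
```

The resulting int8 model avoids floating-point math entirely, which is what makes execution on sub-GHz MCUs feasible.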
Speaker: Quang Hai Nguyen | Arrow

Quang Hai Nguyen works as a Technology Field Application Engineer at Arrow Central Europe GmbH. He joined Arrow in 2015 as a Junior Engineer in the graduate program. After the graduate program, Quang Hai worked as an Application Engineer focusing on microcontrollers. Since 2019, he has been working as a Technology Field Application Engineer for high-end processors and security in embedded systems. At the beginning of this year, he took on additional responsibility, supporting customers with AI-at-the-edge technology.

10:30 a.m.: How 5G/TSN/Edge will Shape the Future of Industrial Networking
Industrial communication systems are evolving towards standardized Time-Sensitive Networking (TSN). However, the flexibilization of manufacturing processes often requires independence from wired connectivity. The Edge Computing paradigm and a TSN-over-5G technology stack are interesting approaches to address this challenge.

In industrial contexts, physics, domain requirements and regulatory frameworks pose communication challenges in terms of dependability, response times and data protection. As a result, a number of approaches were developed using Operational Technology (OT). Within the last 70 years we have seen the evolution from current loops via fieldbuses and industrial Ethernet systems towards Time-Sensitive Networking (TSN). However, the flexibilization of manufacturing processes, such as the use of Automatic Guided Vehicles (AGV), often precludes the use of cable-based communication. Applying the new distributed cloud computing paradigm (Edge Computing) makes it possible to reduce communication demand and to increase autonomy. Yet reliable, low-latency and deterministic wireless communication is needed, and its standardization is still in its infancy. Upcoming 3GPP specifications try to fill this gap (TSN over 5G). To experimentally validate and demonstrate the features of these concepts, we are implementing a first research prototype of a TSN-enabled standalone 5G core for industrial campus networks. Together with the adoption of the Edge Computing paradigm and a management layer for such an infrastructure, we are thereby moving towards fully configurable, flexible, software-based communication infrastructures. The aim of this talk is to raise curiosity about the application possibilities of these technologies and to facilitate further discussions.
Speaker: Dr.-Ing. Alexander Willner | Fraunhofer FOKUS

Dr.-Ing. Alexander Willner is head of the Industrial Internet of Things (IIoT) Center at the Fraunhofer Institute for Open Communication Systems (FOKUS) and head of the IIoT research group within the chair of Next Generation Networks (AV) at the Technical University Berlin (TUB). In collaboration with the Berlin Center of Digital Transformation (BCDT), he works with his groups in applying standard-based Internet of Things (IoT) technologies to industrial domains, such as Industry 4.0. With a focus on moving towards the realization of software-based industrial communication infrastructures, the most important research areas include industrial real-time networks (e.g. TSN/5G), middleware systems (e.g. OPC UA), distributed AI (e.g. via Digital Twins) and distributed Cloud Computing (e.g. Edge Computing) including management and orchestration.

11:15 a.m.: Ethernet to the Edge in Industrial Systems
Great insights and information reside within edge sensors and actuators on the factory floor or in the process control environment. Connectivity is the key to accessing and acting on these insights. This talk explores the potential connectivity solutions that will make the intelligent edge a reality.

The benefits of accessing intelligence at the edge of industrial systems are clear: increased insight, better analytics capabilities, and informed decisions, all leading to increased productivity. The network of digitally connected systems, machines, edge sensors and actuators sharing information is central to the connected-factory vision. To realize this ambition of highly connected, intelligent, and flexible manufacturing, new industrial connectivity solutions are required that enable connectivity with edge devices. There is a requirement for higher data bandwidth, longer-reach cabling, IP addressability, and increased power at edge devices. Coupled with this challenge is the strain exerted on existing industrial networks by increased traffic flows from the myriad of potentially connected edge devices. Today's networks have limited potential for expansion, and new technologies and techniques are required to meet the demands of our automation environments.

The need for seamless connectivity from every sensor or actuator, even those in remote locations, dictates a change in the industrial network and its associated control systems. This presentation discusses the transition from existing field bus technologies to the new 10BASE-T1L Ethernet technology (IEEE standard 802.3cg-2019 / 10BASE-T1L) within the process automation environment. It will outline Ethernet's use over single twisted pair cabling of up to 1km in length while also adhering to the intrinsically safe, Zone 0 requirement of specific applications. This talk will also explore the opportunities presented by enterprise-wide connectivity, while outlining the challenges on the horizon in connecting and leveraging edge intelligence. Connectivity will unleash the real power of edge intelligence, and this presentation will explore how we make this a reality.
Speaker: Fiona Treacy (en) | Analog Devices

Fiona Treacy is a strategic marketing manager for Process Control and Automation focused on Industrial Connectivity at Analog Devices. Before this role, Fiona led the Marketing effort for MeasureWare and other Precision Instrumentation initiatives. She also held positions in Application Engineering and Test Development. Fiona holds a BSc. in Applied Physics and an MBA from the University of Limerick.

TRACK 2 IN THE MORNING

9:00 AM

09:00 a.m.: Concepts for Solving the IoT Puzzle

Throughout this session, we will talk about the what and the how that make IoT solutions complex; it's not as simple as taking a puzzle you have already completed and adding a few pieces to expand the picture. The ability to capitalize on the already-established edge pieces, and then customize and integrate, will make the difference.


Through IoT solutions, machines are getting smarter, gaining context about where they are and what is around them so they can react. Essentially, we are making machines more human - connecting them to their human counterparts so that together, they can do more than ever before. When those intelligent machines are integrated into the enterprise, IoT allows you to remove tasks, decrease complexity and confusion for the end-user, and drive accurate, data-driven decision making.


The collection and analysis of data is imperative to uncovering value with IoT - this may be the focus of your product or only an add-on that enhances the core functionality that your users expect. Either way, you will need to thoroughly think through each component that gives your device the "Internet of Things" label: data collection, hardware connectivity, communication protocol, and device-level security measures.

Speaker: Ralf Pühler | Kuda

Ralf Pühler is President/CEO Europe of KUDA llc, and a business development as well as an operations professional. Understanding customer needs and priorities leads to a continuous customer dialogue from the initial idea to market breakthrough, including quantifying the value and impact of ideas by testing them on the market. Ralf has been connecting things to the Internet for years and has helped customers transform traditional processes into connected ones to optimize cost, eclipse the competition, and create new revenue. He believes in success through providing industry-relevant concepts for value-adding IoT solutions.

09:45 a.m.: AutoML – A Game Changer for Scaling ML in Production
This talk presents how the application of ML in industry can be democratized: domain experts are enabled to create ML solutions without expert knowledge in ML. This becomes possible by combining AutoML with the process and machine knowledge of the domain experts.

Speaker: Tobias Gaukstern | Weidmüller

Tobias Gaukstern heads the Industrial Analytics business unit at Weidmüller. In this role, he is building up a scaling software business for the Weidmüller Group and leading Weidmüller from a leading provider of electrical connection technology to a machine learning champion. His vision: to simplify and accelerate the application of AI and ML so that these technologies can unlock significant value-creation potential broadly across industry. Before his current role, he worked as an Industry Development Manager and as a strategic product manager.

10:30 a.m.: Digital Twins - Model and Optimize the Reality with Graphs
Today’s IoT solutions are device-centric and therefore constrained in how they can leverage context. In this session, I will show how digital twins of reality can be built with the open-sourced Digital Twin Definition Language, how they can be contextualized, and how they fundamentally simplify IoT architectures and applications through a live execution environment.
Speaker: Oliver Niedung | Microsoft

Oliver Niedung grew up in Hannover, Germany, and had various object-oriented software development roles before and after finishing his degree in Medical Informatics at the University of Hildesheim. He held development and sales roles at Berner & Mattner and at Visio, which was acquired by Microsoft in 1999. At Microsoft, Oliver managed the Embedded Server activities in EMEA with leading global OEMs until 2015. Since then, Oliver has worked with the most strategic Microsoft OEMs and partners in Europe on digital transformation and highly innovative IoT solutions.

11:15 a.m.: The Edge and Smart Motors: Decentralized Automation Concepts Without PLC
Classical automation concepts with the PLC at the center often suffer from low scalability and high programming complexity. If one instead thinks in terms of decentralized, reusable modules that bring the necessary logic directly into the devices, exchange process data with each other via the edge, and are managed via the cloud, one obtains a scalable, cost- and space-saving solution that no longer requires a central PLC.
Speaker: Markus Weishaar | Dunkermotoren

Markus Weishaar earned a degree in electrical engineering (B.Eng.) from Ulm University of Applied Sciences in 2012 and supplemented it in 2017 with a degree in industrial engineering (M.Sc.) completed while working. After 11 years in packaging machine construction in various software and management functions, Markus has been Product Manager for IIoT and software at Dunkermotoren since May 2019.

12:00 PM
Joint Keynote ASE & i-edge: How a Cloud/Edge Paradigm is Disrupting the Automation Industry and Why Software is a Key Success Driver
In the past decade, software-driven innovations such as AI and machine learning have revolutionized the information technology used in other areas of business and society. The result is an automation gap that creates barriers to unleashing the step-change productivity improvements that manufacturers aspire to achieve. Cloud/edge computing as a bridging technology enables true, enterprise-ready, software-defined shop-floor solutions.

In this talk we discuss the Six Key Ingredients of Shop-Floor Automation, or: how can manufacturers and automation providers use edge computing and cloud paradigms to enhance productivity on the shop floor?
Speaker: Johannes Boyne | Boston Consulting Group

Johannes joined BCG four years ago. Since July 1, 2020, he has been an Associate Director, located in the Munich office. With BCG and its subsidiary BCG Digital Ventures, he has worked on multiple end-to-end software-driven and cloud/IoT-related business builds and product definitions. Before BCG, Johannes held multiple senior software management and engineering positions. Johannes designed one of the first edge-powered industrial robot setups: BCG and AWS cooperated on a demo setup for a large manufacturer and demonstrated the results at the HMI in 2017. It was one of the first AWS Greengrass installations.

12:45 PM
Lunch break

TRACK 1 IN THE AFTERNOON

1:45 PM

01:45 p.m.: Research project AIfES: Embedded AI, Hierarchical Models and Grey-box Approaches
AIfES is a machine learning framework where the algorithms are optimized for resource-limited hardware such as microcontrollers. The integration of problem-based prior knowledge and hierarchical structures allows small and efficient implementations.

Probably the most frequently used method in machine learning (ML) is the artificial neural network (ANN). The use of deep neural networks (DNN) has led to groundbreaking successes in the recent past. However, the use of ANNs on resource-limited hardware such as microcontrollers (μC) faces hurdles and limitations. Current software frameworks for machine learning are optimized for the PC and use Python. This allows easy implementation and fast training of DNNs. However, large DNNs can only be implemented on microcontrollers to a limited extent, and there is no standardized method for porting them yet. Some solutions are already available; for example, STMicroelectronics offers the STM32Cube.AI® ecosystem for its own μCs, into which pre-trained ANNs can be imported. Google® also offers a way to port pre-trained neural networks with TensorFlow® Lite for μC. Current solutions focus on porting a pre-trained ANN and usually require a 32-bit platform.

With its own research project AIfES (Artificial Intelligence for Embedded Systems), Fraunhofer IMS investigates the application of artificial intelligence on resource-limited systems. AIfES is a machine learning framework that can run on almost any hardware platform, from an 8-bit μC to a PC. The software framework was developed in the programming language C for maximum compatibility. It is standalone but also compatible with other ML tools such as TensorFlow®. By importing the structure and weights, an ANN can easily be replicated. Development started with the implementation of a freely configurable feedforward neural network (FNN), whereby all areas from activation functions to memory management were optimized for use on μC. These measures even allow a neural network to be trained on an embedded system. In order to use ANNs on μCs or DSPs without floating-point arithmetic, the use of fixed-point arithmetic was successfully implemented and investigated.
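As a conceptual illustration of the fixed-point idea (AIfES itself is written in C; the Python sketch below is not AIfES code): a dense-layer forward pass in Q15 arithmetic, the kind of integer-only math that runs on microcontrollers and DSPs without a floating-point unit.

```python
# Conceptual illustration only: a dense layer with ReLU in Q15 fixed-point.
# Not AIfES code; it just shows the integer-only arithmetic involved.
Q = 15                       # Q15: 1 sign bit, 15 fractional bits
ONE = 1 << Q

def to_q15(x: float) -> int:
    return max(-ONE, min(ONE - 1, int(round(x * ONE))))

def q15_mul(a: int, b: int) -> int:
    return (a * b) >> Q      # multiply, then shift back down to Q15

def dense_q15(inputs, weights, biases):
    """y = ReLU(W x + b), all values in Q15 (accumulation in plain ints)."""
    out = []
    for row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, row):
            acc += q15_mul(x, w)
        out.append(max(0, acc))  # ReLU
    return out

x = [to_q15(v) for v in (0.5, -0.25, 0.75)]
W = [[to_q15(v) for v in row] for row in ((0.1, 0.2, -0.3), (0.4, -0.1, 0.2))]
b = [to_q15(v) for v in (0.05, -0.02)]
print([v / ONE for v in dense_q15(x, W, b)])   # convert back to float to inspect
```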

For the implementation of AI on μC, not only optimized algorithms are needed; optimization already starts with the feature extraction. For example, hierarchical models were developed to reduce the size of the network: several small ANNs replace one large DNN where possible. The IMS developed, for example, a complex gesture recognition that can be used, e.g., for menu navigation in wearables. Different gestures may be required in different menu areas, so that several small ANNs can be trained, each of which handles a small selected number of gestures. In this way, a large DNN that recognizes all gestures can be replaced.

Another topic is grey-box approaches, where prior knowledge from the application is included in the feature extraction. DNNs and big-data approaches follow a black-box strategy, whereby a lot of information is processed and fed into a large DNN. Grey-box approaches should help to present only the necessary features to the ANN and keep the number of inputs as low as possible.
Speaker: Dr. Pierre Gembaczka | Fraunhofer IMS

Dr. Pierre Gembaczka has been a research associate since 2014. He studied microtechnology and medical technology and holds a Master's degree from the University of Applied Sciences in Gelsenkirchen. Afterward, he completed his doctorate at Fraunhofer IMS in cooperation with the University of Duisburg-Essen and obtained a doctoral degree in engineering. From 2014 to 2017, he worked as a research assistant in the Micro- and Nanosystems - Pressure Sensors department at Fraunhofer IMS. Since 2018 he has worked in the embedded systems group at Fraunhofer IMS, researching embedded AI solutions for various applications. He is the primary developer of the AI software framework AIfES (Artificial Intelligence for Embedded Systems).

02:30 p.m.: Designing an AI-Enabled Camera Device for the Edge

Today's camera-enabled multimedia SoCs are getting more and more complicated. Embedded solution architects and system developers have to deal with various challenges on the hardware side and, on the other side, with very extensive software frameworks, from device drivers to networking stacks and full AI stacks. This talk will give an overview of what such a design might look like and touch on each of the needed building blocks, mainly from a software and system architecture point of view.


This talk aims to introduce the overall system architecture of a camera device designed for use at the edge and to enable the audience to grasp today's most relevant technology topics. We will start by providing a hardware block diagram of such a system and look into the most critical modules to discuss their relevance for edge systems. Based on that block diagram, we will see how the hardware can be utilized and controlled from operating systems and how the most common interfaces look. Particular focus will be given to the operating-system infrastructure that supports features like the camera, security-related functions, and networking.


We need a more in-depth look to get a sense of why security is so critical, especially in edge-based AI systems: what the implications are and what needs to be secured. Based on that foundation, we will start looking into userspace software frameworks and the corresponding APIs for making use of the hardware features to allow on-device data processing and minimize the data to be stored or exchanged, as well as technologies to connect the device to the cloud.


We also need to talk about testing: why it is especially complicated to test AI-based systems, and what strategies we can use to perform such tests. How can we test connected devices and create environments that are "close to their real field of application" in terms of connection speed, connection reliability, power losses, and much more? We will close the lecture with an overview of typical use cases and their application-specific requirements.

Speaker: Dieter Kiermaier | Arrow

Dieter Kiermaier works with Arrow as a Technical Solution Architect. He started his career as a developer of embedded Linux systems. For six years, he has been active in electronic component distribution, working as a Technology Field Application Engineer for high-end processors and systems-on-module. At the beginning of this year, he moved into the role of Technical Solution Architect for embedded systems and cloud-based solutions at eInfochips (an Arrow company).

03:15 p.m.: SMART NEURO CHIP – Deep-Learning Computing on the Edge
AVI has implemented a deep-learning hardware and software toolbox for realizing deep-learning-on-the-edge applications, based on deep-learning accelerator software tools and a unique chip architecture that processes data at high speed and with very low power consumption.

Deep learning using convolutional neural networks (CNN) brings a huge improvement in accuracy and reliability for various tasks in automation and incident detection. Many different concepts, like single-shot detectors, have been published for detecting objects in images or video streams. However, CNNs suffer from disadvantages when it comes to deployment on embedded platforms such as re-configurable hardware like field-programmable gate arrays (FPGAs). Due to the high computational intensity, memory requirements and arithmetic conditions, a variety of strategies for running CNNs on FPGAs or ASICs have been developed.

In addition, functional safety and self-awareness are becoming more and more important, because the user wants to be able to trust the information from the sensor system. This so-called safety of the intended function (SOTIF) is one of AVI's core issues and the second main requirement to be addressed with the FPGA IP core developed by AVI. Safety and sensor data with guaranteed latency thresholds are among AVI's specialties, stemming from the camera-monitor systems that are among the major products in the RAILEYE product line. This know-how was applied to the machine-learning methods and algorithms in AVI's on-the-edge solutions.
Being aware of this requirement, we took the decision to develop our own software tools and IP core architecture. To get a proof of concept for this new technology, AVI decided to realize a concrete application: the truck turn assistant "CAREYE SAFETY ANGLE", which is already on the market and has the required type approvals from the Kraftfahrt-Bundesamt (KBA). This product includes a deep-learning-based object detector running on a low-power, fanless controller box.

The presented methods show our best-practice approaches, for example a TinyYOLOv3 detector network on a Xilinx Artix-7 FPGA using techniques like fusion of batch normalization, filter pruning and post-training network quantization. Results will be demonstrated that compare the precision of the CNN after the optimization steps with YOLOv3 and other networks, as well as the performance in terms of frames per watt and frames per second on the Xilinx Artix-7. An outlook will show possible future applications that can be solved using this unique technology.
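A hedged sketch of one of the optimization steps named above, batch-normalization fusion, in plain NumPy (illustrative only, not AVI's toolchain): the BN scale and shift are folded into the preceding convolution's weights and bias, so the deployed network needs fewer operations at inference time.

```python
# Illustrative sketch: fold a batch-normalization layer into the preceding
# convolution's weights and bias, a common pre-deployment optimization.
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """
    w: conv weights, shape (out_channels, in_channels, kh, kw)
    b: conv bias, shape (out_channels,)
    gamma, beta, mean, var: per-channel BN parameters, shape (out_channels,)
    Fused parameters:
        w' = w * gamma / sqrt(var + eps)
        b' = (b - mean) * gamma / sqrt(var + eps) + beta
    """
    scale = gamma / np.sqrt(var + eps)
    w_fused = w * scale[:, None, None, None]
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

# toy example: 4 output channels, 3 input channels, 3x3 kernels
w = np.random.randn(4, 3, 3, 3)
b = np.zeros(4)
gamma, beta = np.ones(4), np.zeros(4)
mean, var = np.random.randn(4), np.abs(np.random.randn(4)) + 0.1
w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)
print(w_f.shape, b_f.shape)
```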
Speaker: Johannes Traxler | EYYES

Johannes Traxler attended the HTBLuVA for telecommunications and computer science in St. Pölten. Thereupon he studied technical physics and astronomy at the TU in Vienna. During his studies, he concentrated on artificial intelligence research and development with machine learning, based on the early recognition of tumor cells. Between 1999 and 2002, he worked as director of research and development at ArtiBrain Forschungs- und Entwicklungs GmbH, implementing machine-learning-based tunnel safety and incident detection systems for the first time. In 2013, he founded AVI-Systems GmbH, where he started with deep-learning-based applications for manufacturing inline inspection systems in 2014. Since then, he has acted as CEO, invented new approaches for real-time video transmission with his team, and collaborated with different research organizations to develop deep-learning applications. Mr. Traxler is head of the task force "Videodetektion in VBA", which is part of the German Road and Transportation Research Association and makes significant contributions to the development of technical standards and guidelines for Germany.

04:00 p.m.: 5G/AI/Edge Computing - Data Management Challenges, Technologies and Architectures

The worldwide shift towards digitization has placed immense pressure on connectivity. Over 20 billion Internet of Things (IoT) connected devices have already been installed, and the number is estimated to grow to 75.44 billion by 2025. IoT devices benefit all industry sectors, promoting higher levels of convenience, productivity, and communication. To keep up with the vast amounts of data we need to utilize the processing power on the IoT devices and make the IoT edge intelligent.

Edge computing solves two problems by merging them into one solution. On the one hand, the constraint on cloud data centers to handle increasing amounts of data is about to reach a breaking point. On the other hand, Artificial Intelligence (AI) systems consume information at such a speed that there doesn't seem to be enough of it. Edge databases enable applications to bring Machine Learning (ML) models to the edge.

Powered by real-time databases and AI, the intelligent edge can provide real-time insights for the improvement of many industries. Last-mile delivery is made faster and more efficient with features such as smart tracking and real-time route navigation. Intelligent systems can provide stylistic insights for customers as they browse the shop, and facial recognition software can be used for fraud detection. Edge intelligence can provide predictive risk assessments for healthcare professionals and promote better health awareness. There’s no limit as to the benefits the intelligent edge can provide, not even the cloud.

Speaker: David Nguyen (en) | Raima

David Nguyen is the Head of Engineering & QA at Raima. He started his career as a QA Software Engineer, working on the creation and maintenance of a fully automated QA testing framework and a complete daily build system. As Director of Engineering, he leads the development effort to modernize the database product line with a feature focus on AI, autonomous driving and edge IoT, while ensuring the customer experience is as straightforward and easy to use as possible. David holds a B.S. in both Mathematics and Computer Science from the University of Washington.

04:45 p.m.: Factory Digitization through Decentralized Edge Computing in Practice
Decentralized edge computing is a new paradigm that allows solutions to run entirely without a centralized server. It makes solutions ultra-reliable, performant and scalable and provides huge benefits when digitizing and automating factory processes.

The most prevalent edge-computing systems pre-process or analyze data on the edge and send all or part of the data to a central server for data storage and analytics. Whilst this architecture has huge benefits in certain use cases, it lacks the reliable and fast communication between edge devices that is required when digitizing and automating factory processes on the shop floor (think machine-to-machine communication).

Decentralized edge computing overcomes this limitation. It is a new paradigm in which there is no central server at all and communication between edge devices happens in a direct, peer-to-peer fashion. This makes solutions very reliable and scalable, as there is no single point of failure or bottleneck. In the context of a factory, this allows direct machine-to-machine or machine-to-human communication without a central entity. With no infrastructure overhead, solutions can be implemented at a small scale and later reliably scaled out. Max Fischer provides a deep dive into this new paradigm and shows how manufacturers such as PERI and Klöckner have benefited from this architecture to digitize their operations.
Speaker: Maximilian Fischer | Actyx

Max is a mechanical engineer who is fascinated by improving manufacturing processes with software. He completed his studies at ETH Zurich, Caltech, Duke and EPFL Lausanne. Before co-founding Actyx, he did research in the field of physical chemistry.

TRACK 2 IN THE AFTERNOON

1:45 PM

01:45 p.m.: Condition Monitoring Solutions for Plants Considering Suitable Data Transmission Technologies (de)
The presentation covers the following aspects:

  •     Internet access technologies and radio technologies for condition monitoring solutions: when WLAN, LoRa, Sigfox or mobile radio is best suited, considered according to the application
  •     Process monitoring and automation with sensor solutions
  •     Challenges of radio
  •     Application example: Motor monitoring at Murrelektronik


Speaker: Thomas Schildknecht | Schildknecht AG

Thomas Schildknecht is the founder and CEO of Schildknecht AG, a specialist for industrial radio data transmission and the industrial Internet of Things. The company has developed a future-oriented radio system for industrial use to make secure and stable radio transmissions possible. Schildknecht is also a system provider in remote monitoring and remote maintenance, telemetry, and M2M.

02:30 p.m.: Reproducible Data Science with a Narrative Focus
Building and optimizing a data analysis pipeline is a core activity for a company with a data-centric business model. Other companies, especially small and medium-sized enterprises, struggle to set up a similar data analysis workflow. One reason, among others, is that many of the tools employed by data-centric companies are expensive to buy or require constant maintenance by qualified personnel.

However, without a fixed and reproducible data analysis workflow, even medium-sized data sets can easily be overwhelming, and one can get lost in their details, missing obvious patterns.

In his talk, Nikolai Hlubek will present an example workflow that starts from scratch with the planning of laboratory experiments and ends with a trained machine-learning model on a microcontroller. The workflow uses freely available tools: the explorative data analysis is done in Python and Jupyter notebooks, the model building is done with Keras, and the optimization of the Keras model for the microcontroller is done with X-CUBE-AI from ST.
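A minimal sketch of the final step of such a workflow, assuming synthetic data and a deliberately tiny model: train a small Keras model and save it as an .h5 file, the kind of artifact that tools such as ST's X-CUBE-AI can import and convert for the microcontroller.

```python
# Minimal sketch with placeholder data: train a tiny Keras model and save it
# so a converter tool (e.g. X-CUBE-AI) can pick it up for the microcontroller.
import numpy as np
import tensorflow as tf

# stand-in for features extracted from the planned laboratory experiments
X = np.random.rand(500, 8).astype(np.float32)
y = (X.sum(axis=1) > 4.0).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

model.save("condition_model.h5")   # artifact handed over to the converter
```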
Speaker: Dr. rer. nat. Nikolai Hlubek | Bürkert Fluid Control Systems

Nikolai Hlubek is a Research and Development Engineer at Bürkert Fluid Control Systems. He holds a Ph.D. in physics and has been doing data science for more than ten years. Nikolai is the winner of the 2017 What The Data!? Hackathon.

03:15 p.m.: Functional Safety with AI
To integrate AI as a critical part of solutions with respect to functional safety, different scopes are currently being worked on. Because of its importance in the development of functionally safe systems, one scope concerns the development process and AI. It also seems to be the simpler part, because in this field there is much experience in how to design "proper" processes for developing functionally safe systems.

It is more challenging to define techniques and measures to follow when AI is to be designed as a critical part of functional safety. Fundamental research is needed to give the required confidence in such solutions integrating AI in functionally safe systems.

The presentation covers ongoing activities in the areas of "process design" and "techniques and measures." It should start a discussion about whether these activities will result in more confidence in solutions with AI as a critical part of functional safety.
Speaker: Frank Poignée | infoteam SET

Frank Poignée is Chief Engineer and Safety Consultant at infoteam SET. He is a project manager for industrial automation, medical engineering, and life science, and the Product Owner of the infoteam Functional Safety Management Process iFSM. Frank is an ASQF/ISTQB Certified Tester and a TÜV Functional Safety Engineer for hardware and software.

04:00 p.m.: 5G, AI and Heterogeneous Computing
Qualcomm’s heterogeneous compute architecture is driving AI innovation to enable on-device AI use cases and proactively leverages 5G to connect the edge and the cloud.

AI and edge computing require large, compute-intensive tasks and complicated neural network models to be executed in a constrained environment. These challenges are overcome by using a number of specialized processors (CPU, GPU, DSP and NPU), which evolve with each generation. Using such a heterogeneous compute architecture for the execution of sophisticated neural networks allows low-power operation within thermal limits by selecting the right block for the right task. Together with the brand-new Qualcomm Hexagon™ Tensor Accelerator, this pushes 15 trillion operations per second (TOPS) with maximum efficiency to run complex AI and deep learning workloads at the edge. Thanks to the support for 5G connectivity speeds, this creates a framework for the development of a wide range of next-generation applications.


Speaker: Dominik Bohn | Atlantik Elektronik

Dominik Bohn has been working for Atlantik Elektronik since his studies (B.Eng. WI Electrical Engineering & M.Eng. Information, Technology & Management). From 2013 on he was responsible for the embedded portfolio as a Field Application Engineer. Since 2020 he has been Product Manager for 5G & AI solutions based on Qualcomm & Thundercomm technologies.

04:45 p.m.: An Edge AI Case & Lessons Learned
A real-world customer case on gesture-controlled earphones with edge AI and low-power radar.
Speaker: Alexander Samuelsson (en) | Imagimob AB

Alexander Samuelsson is CTO of imagimob, which he founded together with Tony Hartley and Anders Hardebring in 2013. Alex has extensive experience in software development in areas such as mobile apps, mobile games and cloud systems. Previously he studied Computer Science at KTH Royal Institute of Technology. Imagimob offers artificial intelligence products for edge devices. Based in Stockholm, Sweden, the company has been serving customers within the automotive, manufacturing, healthcare and lifestyle industries since 2013.

5:30 PM
End of the Conference
*Subject to changes
