The VipGPU project aims to develop new hardware and software technology to efficiently support cutting-edge application scenarios with the potential for significant research, business, and financial gains: (a) a computer vision application for mobile robotics, and (b) a virtual reality application for medical training in surgical procedures. The project will deliver an FPGA prototype based on the multicore, heterogeneous GPUs of Think Silicon, optimized for the two use cases.
More specifically, we have the following goals:
Enhanced Low-Power GPUs and Hardware Accelerators. The first objective of the project is to develop an FPGA prototype consisting of multiple very low-power GPUs and supporting the OpenCL 2.0 and Vulkan programming models. In addition, customized low-level libraries will be created that run on the Think Silicon GPU and are optimized for its architecture. The FPGA prototype will include hardware accelerators for the most performance-critical functions of the two applications. Specialized accelerators can offer the greatest possible performance, since they are tailored precisely to the requirements of the algorithm. These accelerators will be integrated into Think Silicon’s GPUs.
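To make the accelerator targets concrete, the sketch below shows the kind of performance-critical vision function that a tailored hardware accelerator can replace. A 3x3 image filter is a typical example of such a function; it is chosen here for illustration and is not a function named by the project. The pure-Python reference is for clarity only.

```python
# Illustrative sketch (assumed example, not project code): a 3x3
# image filter is representative of the performance-critical vision
# functions that dedicated hardware accelerators target.

def filter3x3(image, kernel):
    """Apply a 3x3 filter (cross-correlation, no kernel flip) to a
    2D image given as a list of lists of floats.

    Border pixels are skipped, so the output shrinks by 2 in each
    dimension -- the usual 'valid' behaviour.
    """
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += image[y + ky - 1][x + kx - 1] * kernel[ky][kx]
            out[y - 1][x - 1] = acc
    return out

if __name__ == "__main__":
    img = [[float(x + y) for x in range(5)] for y in range(5)]
    box = [[1.0 / 9.0] * 3 for _ in range(3)]  # simple box blur
    print(filter3x3(img, box))
```

The four nested loops make the cost per frame obvious: for an HD image the innermost multiply-accumulate runs millions of times, which is exactly the kind of regular, data-parallel work that a fixed-function accelerator or a GPU kernel handles far more efficiently than a general-purpose CPU.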
Machine Vision in Mobile Robotic Systems. Mobile robotic systems are used, among other purposes, for education and for supporting older people. The first application of the project will accurately determine the position of a robot while simultaneously mapping its surroundings through visual processing. Computing this information is a key process in an autonomous robotic system, as it feeds a series of other processes, such as gait planning, perception and understanding of the surroundings, path planning, etc. Highly accurate algorithms rely on computationally intensive processing, which prevents their deployment on portable devices and small- to medium-size robotic systems. To reduce these power and computational demands, researchers often resort to compromises that degrade the efficiency and robustness of the resulting algorithms. In this project, we aim to develop a positioning system for embedded systems so as:
- to determine the exact position of a robot under real-time conditions,
- to enable robust image processing under poor lighting and in changing scenes,
- to create a low-power processing subsystem that does not draw on the robot’s own processing power.
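The positioning goal above can be illustrated with a minimal sketch of incremental 2D pose tracking (dead reckoning). This is a deliberately simplified stand-in for the visual positioning pipeline; the function name and the turn-then-move motion model are assumptions made for illustration only.

```python
import math

# Minimal sketch (illustrative assumption, not the project's method):
# integrating incremental motion estimates into a 2D pose, the core
# bookkeeping step of any positioning system.

def integrate_pose(pose, dtheta, dist):
    """Apply one motion increment: rotate by dtheta, then move dist
    forward along the new heading. pose = (x, y, theta)."""
    x, y, theta = pose
    theta += dtheta
    x += dist * math.cos(theta)
    y += dist * math.sin(theta)
    return (x, y, theta)

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    # Drive a square: four (turn 90 degrees, move 1 m) increments
    # bring the robot back to its starting position.
    for _ in range(4):
        pose = integrate_pose(pose, math.pi / 2, 1.0)
    print(pose)
```

In a visual positioning system the increments come from camera-based motion estimation rather than wheel odometry, and small per-step errors accumulate, which is why the computationally heavy correction steps (feature matching, map alignment) mentioned above are needed and why they benefit from low-power GPU acceleration.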
Virtual Reality. The second application is an innovative environment for building next-generation training games that take full advantage of new real-time virtual reality and motion-detection technologies. Recognizing the current state of technology and market needs, the partners will expand their existing technology and know-how with new algorithms that will allow:
- A virtual reality simulation for training orthopedic surgeons in surgical procedures,
- Stereoscopic display on a new portable personal projection system, based on the new very low-power GPUs and requiring no connection to a personal computer, and
- Simplified creation of educational games for surgical training.
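The stereoscopic display goal can be sketched with the basic camera setup it implies: rendering the scene twice, from two eye positions offset around the viewer's head. The function name, the 64 mm default interpupillary distance, and the coordinate conventions below are assumptions for illustration, not the project's actual pipeline.

```python
# Minimal sketch (assumed names and values): deriving left/right eye
# camera positions for stereoscopic rendering from a head position,
# the head's right-pointing unit vector, and an interpupillary
# distance (IPD, here defaulting to a typical 64 mm).

def eye_positions(head, right_dir, ipd=0.064):
    """Offset the head position by half the IPD along the head's
    right vector to obtain left- and right-eye camera positions."""
    hx, hy, hz = head
    rx, ry, rz = right_dir
    half = ipd / 2.0
    left = (hx - rx * half, hy - ry * half, hz - rz * half)
    right = (hx + rx * half, hy + ry * half, hz + rz * half)
    return left, right

if __name__ == "__main__":
    left, right = eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
    print(left, right)  # two cameras 64 mm apart around the head
```

Rendering every frame twice from these two viewpoints roughly doubles the GPU load compared with a monoscopic display, which is why an untethered projection system of this kind depends on the very low-power GPUs developed in the project.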
The expected results will lead to a system well suited to the visualization of medical simulations and to experiential education in procedures and events.