Labs such as the Australian Centre for Visual Technologies at The University of Adelaide are using NVIDIA Tesla GPU accelerators to advance 3D object visualization and recognition for applications including 3D scanning, augmented reality, robotics, and autonomous driving.
The GPUs process large volumes of data orders of magnitude faster than traditional CPUs, and provide the horsepower to run complex simulations more quickly than previously possible.
The University of Adelaide, one of Australia’s leading research universities, is applying machine learning, artificial intelligence (AI), and techniques such as structure from motion (SfM) to ensure that intelligent systems like robots or self-driving cars can accurately analyze the scenes they encounter. Without this capability, a car could not differentiate vehicles from pedestrians or decide where the road ends and the curb begins.
Ravi Garg, Senior Research Associate, Australian Centre for Visual Technologies, The University of Adelaide, explains that scenes can be broken down into geometric shapes that can then be identified as objects no matter how they are rotated. Once properties such as size, speed, and direction of movement are assigned to an object, intelligent systems such as a robot or a car can then react appropriately. It will even be possible for such systems to reconstruct 3D objects from limited views of a scene.
“My background is mostly related to structure from motion, where we see multiple images from multiple viewpoints,” said Garg. “What we want to do is to understand the geometry of a scene. What we are working on now is not only to use machine learning and AI as tools to give inputs, outputs, and develop mappings, but also to look at how we can achieve consistent results in new situations and generate more insights into learning.”
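As a rough illustration of the two-view geometry Garg describes, the sketch below is a minimal structure-from-motion example in Python with OpenCV: it matches features between two images of the same scene, estimates the relative camera pose through the essential matrix, and triangulates a sparse 3D point cloud. It is a generic sketch rather than the Centre’s own pipeline; the function name, the ORB and RANSAC choices, and the assumption of a known camera intrinsics matrix K are all illustrative.

```python
# Illustrative two-view structure-from-motion sketch (not the ACVT codebase):
# recover relative camera pose and a sparse 3D point cloud from two images.
import cv2
import numpy as np

def two_view_reconstruction(img1_path, img2_path, K):
    """Estimate relative pose and triangulate sparse 3D points from two views.

    K is the 3x3 camera intrinsics matrix (assumed known from calibration).
    """
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # Detect and match local features between the two viewpoints.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The essential matrix encodes the relative geometry between the cameras.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into 3D points (up to scale).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask_pose.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    pts3d = (pts4d[:3] / pts4d[3]).T
    return R, t, pts3d
```

In a full pipeline this two-view step would be repeated across many images and refined with bundle adjustment, which is where GPU acceleration becomes valuable.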
Garg is working with Professor Ian Reid on Reid’s Laureate Fellowship project, “Lifelong Computer Vision Systems”. The project aims to develop robust computer vision systems that can operate over wide areas and long periods as the environment changes over time.
The ultimate goal, says Garg, is to create self-learning systems that can collect and analyze scenes automatically. “At The University of Adelaide we are working on unsupervised learning systems which are very applicable to healthcare, where there is heavy reliance on experts for decision-making. Instead of asking an expert to diagnose millions of data points, we can provide an initial screening of large collections of medical data. We could have systems which can help doctors classify tumors or even assist in invasive surgery,” Garg said.
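The kind of first-pass screening Garg mentions could, in principle, be framed as an unsupervised outlier ranking, so that the most atypical cases reach a clinician first. The short sketch below is purely hypothetical: the feature vectors, the Isolation Forest model, and the contamination rate are illustrative assumptions, not a description of the university’s system.

```python
# Hypothetical sketch of unsupervised "first pass" screening: rank cases so
# that the most atypical ones are reviewed by an expert first.
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_for_review(feature_vectors, contamination=0.05):
    """Return case indices ordered from most to least anomalous.

    feature_vectors: (n_cases, n_features) array, e.g. image embeddings.
    """
    model = IsolationForest(contamination=contamination, random_state=0)
    model.fit(feature_vectors)
    # Lower scores mean more anomalous; sort ascending so outliers come first.
    scores = model.score_samples(feature_vectors)
    return np.argsort(scores)

# Example: 1,000 cases described by 128-dimensional feature vectors.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 128))
priority_order = screen_for_review(embeddings)
```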
Garg’s research is made possible through state-of-the-art horsepower from the university’s Phoenix supercomputer, which went live in 2016. The supercomputer is based on Lenovo technology and boosted with NVIDIA Tesla GPU accelerators to handle demanding high performance computing (HPC) workloads. Phoenix has cut down the time spent waiting for HPC resources at The University of Adelaide and facilitated faster research outcomes.