Jae Sun Seo

Jae Sun Seo received his B.S. from Seoul National University and his M.S. and Ph.D. degrees in electrical engineering from the University of Michigan. He is now an associate professor in the School of ECEE at Arizona State University, where his research specializes in energy-efficient hardware design for machine learning and neuromorphic computing.

Machine learning advances have enabled computers to match or surpass human performance on cognitive tasks such as vision and natural language processing, yet implementations on conventional von Neumann architectures still consume orders of magnitude more energy than their biological equivalents. Overcoming this disparity requires research into alternative architectures, devices, and algorithms that better emulate neurons and synapses for efficient information processing.

Machine learning applications that rely on massive amounts of data increasingly demand energy-efficient hardware for both training and inference on real-world tasks. In particular, the von Neumann architecture's computation/memory communication bottleneck dominates area and power consumption when running deep neural networks (DNNs). Non-von Neumann architectures that bring compute closer to memory can significantly lower data communication costs; Jae Sun Seo's research specializes in designing such hardware while co-optimizing DNN architectures for both performance and energy efficiency.
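A back-of-the-envelope model illustrates why this communication bottleneck dominates. The sketch below is not from Prof. Seo's work; the per-operation energies are rough ballpark figures often cited in the architecture literature for older process nodes, used here only to show the relative magnitudes.

```python
# Illustrative energy model for one fully connected DNN layer, showing how
# off-chip memory access can dominate total energy. The per-op energies are
# assumed ballpark values (pJ), not measurements.

MAC_PJ = 4.6        # ~energy of one 32-bit multiply-accumulate
DRAM_PJ = 640.0     # ~energy of one 32-bit off-chip DRAM access

def layer_energy(n_in, n_out, weights_on_chip):
    macs = n_in * n_out                      # one MAC per weight
    compute = macs * MAC_PJ
    # If weights live off-chip, every MAC pays a DRAM access for its weight.
    weight_fetch = 0 if weights_on_chip else macs * DRAM_PJ
    return compute + weight_fetch

von_neumann = layer_energy(1024, 1024, weights_on_chip=False)
near_memory = layer_energy(1024, 1024, weights_on_chip=True)
print(f"off-chip weights: {von_neumann / 1e6:.1f} uJ")
print(f"on-chip weights:  {near_memory / 1e6:.1f} uJ")
print(f"ratio: {von_neumann / near_memory:.0f}x")
```

Under these assumptions, fetching every weight from DRAM costs two orders of magnitude more energy than the arithmetic itself, which is the gap that near-memory and in-memory designs attack.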

His research also centers on designing FPGA-based accelerators for training and executing machine learning algorithms, with the goal of creating an automatically compilable, fine-grain reconfigurable FPGA that can accelerate both training and inference of DNNs at the edge. He is exploring structured sparsity and low-precision quantization for memory compression, as well as energy-efficient DNNs.
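The two compression ideas above can be sketched concretely. The following is a minimal illustration, not Prof. Seo's actual method: block-wise pruning zeroes whole groups of columns (structured sparsity, which hardware can exploit directly), and symmetric uniform quantization maps weights to 8-bit integers plus one scale factor.

```python
import numpy as np

def block_prune(w, block=4, keep_ratio=0.5):
    """Zero whole column blocks with the smallest L2 norm (structured sparsity)."""
    blocks = w.reshape(w.shape[0], -1, block)       # group columns into blocks
    norms = np.linalg.norm(blocks, axis=(0, 2))     # one score per block
    n_keep = int(np.ceil(len(norms) * keep_ratio))
    mask = np.zeros_like(norms, dtype=bool)
    mask[np.argsort(norms)[-n_keep:]] = True        # keep the strongest blocks
    return (blocks * mask[None, :, None]).reshape(w.shape)

def quantize_int8(w):
    """Symmetric uniform quantization to 8-bit integers plus a float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16)).astype(np.float32)
w_sparse = block_prune(w, block=4, keep_ratio=0.5)  # half the blocks survive
q, scale = quantize_int8(w_sparse)                  # 4x smaller than float32
print("nonzero fraction:", np.count_nonzero(w_sparse) / w_sparse.size)
```

Because entire column blocks are zeroed, the surviving weights keep a regular layout that a dataflow accelerator can skip over without per-element index bookkeeping, which is what makes structured sparsity friendlier to hardware than unstructured pruning.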

Recently, he has worked on an accelerator using resistive crossbar arrays that significantly decreases memory footprint with only 0.6% accuracy loss compared to a floating-point DNN model. These results demonstrate that DNN-specific memory compression and structured sparsity are more effective at reducing DNN energy/area consumption at the edge than general-purpose compression techniques.
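A resistive crossbar computes a matrix-vector product in the analog domain: weights are stored as cell conductances, inputs are applied as row voltages, and each column current sums the products by Ohm's and Kirchhoff's laws. The sketch below is an idealized model of that principle only; it ignores the device nonidealities (wire resistance, conductance drift, ADC precision) that real crossbar accelerators must handle.

```python
import numpy as np

def crossbar_mvm(G, v):
    """Idealized crossbar matrix-vector multiply.

    G: conductance matrix (rows = input lines, cols = output lines)
    v: input voltage vector
    Each column current is sum_i G[i, j] * v[i], i.e. I = G^T v.
    """
    return G.T @ v

G = np.array([[1.0, 0.5],
              [0.2, 0.3],
              [0.0, 1.0]])          # conductances store the weights
v = np.array([1.0, 2.0, 3.0])      # inputs applied as voltages
print(crossbar_mvm(G, v))          # column currents = analog dot products
```

The appeal is that the multiply-accumulate happens where the weights are stored, so no weight ever crosses a memory bus, which is exactly the data-movement cost the preceding paragraphs identify as dominant.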

An important project involves accelerating drone-image detection on resource-constrained FPGAs, a technology essential for autonomous vehicles (AVs) to navigate roads safely. The team is creating a highly efficient, auto-compilable, reconfigurable design for rapid processing of drone imagery that AVs can use to detect obstructions and obstacles on the roadway, and is leveraging RRAM to significantly shrink its memory footprint.

He is interested in ferroelectric in-memory computing devices.

Traditional Boolean computing based on the von Neumann architecture can present challenges to emerging applications that demand massive data parallelism, due to the limited communication bandwidth between memory units and logic units. One solution is to adopt brain-inspired neuromorphic computing paradigms, which provide faster memory access at lower power consumption and cost. Implementing such paradigms at scale, however, may require novel devices with suitable properties.

Ferroelectric transistors and memory (FeFET/FeRAM) represent one such exciting new technology. Relying on physical mechanisms that have yet to be commercially exploited, these devices offer opportunities for nonvolatile memory storage, combined logic/memory functions, and neuromorphic computing applications.

FeFETs/FeRAMs must switch on and off quickly while consuming little power in order to serve as in-memory computing devices. A key way to accomplish this is to place the devices in the back end of line (BEOL) rather than the front end of line (FEOL): doing so does not interfere with the optimized metal routing achieved in the FEOL, and it allows vertical stacking of cells for three-dimensional memories. Finding an accommodating BEOL process remains the challenge.

He is a member of TinyML Research Symposium Technical Program Committee.

The tinyML Research Symposium offers researchers an opportunity to present and discuss emerging work on low-power embedded machine learning systems that operate with limited resources, typically in the milliwatt range or below. Its scope naturally aligns with neuromorphic engineering, which seeks to mimic how biological systems sense and process information under tight resource constraints.

The 2021 Low Power Machine Learning Symposium will take place as part of the Embedded Systems Week conference in San Francisco, CA, and will consist of a single track of invited talks by international experts from academia, industry, start-ups, and government labs. It provides a rare opportunity for low-power machine learning experts from diverse backgrounds to come together and share knowledge on recent advances in algorithms, technology, and architecture.

ML technologies are enabling an ever-increasing number of applications to operate at the edge, on devices with small code, data, and power footprints. This growth is propelled by the widespread availability of sensor hardware, by powerful machine learning algorithms and the technologies that efficiently run them, and by recent progress in designing deeply embedded systems that combine algorithms, architecture, and technology into a cohesive whole.

Effectively executing machine learning at the intersection of algorithms, technology, and architecture presents a difficult challenge: aggressive optimizations must be applied while still meeting performance, real-time operation, and privacy objectives. These optimizations often require tradeoffs such as quantization, pruning, and analog computation, sometimes at a cost to model accuracy, yet they also open opportunities in emerging technologies such as nonvolatile memory and in-memory computing, and in novel architectures involving dataflow choices, sparsity support mechanisms, and core/memory coupling and caching schemes.

Audio offers tinyML an attractive target, with applications in recognition, event detection/triggering, speech transformation/generation, and more. However, its high information density poses challenges for performance, power consumption, and memory on tiny edge devices; possible mitigations include tightly integrated end-to-end platforms, data augmentation techniques, and new optimization tools.

He is a member of the International Technical Program Committee for ISSCC.

His research focuses on energy-efficient hardware design for machine learning algorithms and neuromorphic computing, and on memory circuit design for next-generation processors and accelerators. He serves on technical program committees for conferences such as ISSCC, MLSys, DAC, DATE, and ICCAD, and is an Associate Editor for IEEE Transactions on VLSI Systems and IEEE Solid-State Circuits Letters.

Prof. Seo was recently selected as guest editor for an IEEE Journal of Solid-State Circuits special issue dedicated to “Advances in Clock Generation and Voltage Regulation Techniques for System-on-Chip (SoC).” Congratulations!

Prof. Seo and Xiao-yang Zhao published a paper entitled, “A Neuromorphic Spike Clustering Processor for Deep Brain Sensing and Stimulation Applications”, in the latest issue of IEEE Transactions on Neural Networks and Learning Systems. Congratulations!

Professor Seo was invited to present at the International Symposium on New Frontiers of Scientific Innovation organized by Korea Foundation of Advanced Studies and Chosun Ilbo on July 10, 2023. Congratulations!

Professor Seo has been appointed to serve on the International Technical Program Committee of ISSCC! Congratulations on your selection!

Congratulations to Teng Yang and Doyum Kim on having their paper on an in-field technique for NBTI degradation in register files accepted and published in IEEE Transactions on VLSI Systems!

Jae Sun Seo earned his B.S. from Seoul National University in 2001 and his M.S. and Ph.D. degrees from the University of Michigan in 2006 and 2010, respectively, all in electrical engineering. During graduate school he held research internships at the Intel circuit research lab and the Sun Microsystems VLSI research group, and he has received the Samsung Scholarship and the NSF CAREER Award.
