Energy-Efficient AI Hardware Technology Via a Brain-Inspired Stashing System
Researchers demonstrate a neuromodulation-inspired stashing system for the energy-efficient learning of a spiking neural network using a self-rectifying memristor array

< Image: A schematic illustrating the localized brain activity (a-c) and the configuration of the hardware and software hybrid neural network (d-e) using a self-rectifying memristor array (f-g). >

Researchers have proposed a novel system inspired by the neuromodulation of the brain, referred to as a 'stashing system,' that requires less energy. The research group led by Professor Kyung Min Kim from the Department of Materials Science and Engineering has developed a technology that can efficiently handle mathematical operations for artificial intelligence by imitating how the brain continuously changes the topology of its neural network according to the situation. The human brain changes its neural topology in real time, learning to store or recall memories as needed. The research group presented a new artificial intelligence learning method that directly implements these neural coordination circuit configurations.

Research on artificial intelligence is intensifying, and the development and release of AI-based electronic devices are accelerating, especially in the age of the Fourth Industrial Revolution. Implementing artificial intelligence in electronic devices also requires customized hardware development. However, most electronic devices for artificial intelligence require high power consumption and highly integrated memory arrays for large-scale tasks. It has been challenging to overcome these power consumption and integration limitations, and efforts have been made to find out how the human brain solves such problems.

To prove the efficiency of the developed technology, the research group created artificial neural network hardware equipped with a self-rectifying synaptic array and an algorithm called a 'stashing system' that was developed to conduct artificial intelligence learning. As a result, the stashing system reduced energy consumption by 37% without any degradation in accuracy. This result demonstrates that emulating the neuromodulation of the human brain is possible.

Professor Kim said, "In this study, we implemented the learning method of the human brain with only a simple circuit composition, and through this we were able to reduce the energy needed by nearly 40 percent."

This neuromodulation-inspired stashing system that mimics the brain's neural activity is compatible with existing electronic devices and commercialized semiconductor hardware. It is expected to be used in the design of next-generation semiconductor chips for artificial intelligence.

This study was published in Advanced Functional Materials in March 2022 and supported by KAIST, the National Research Foundation of Korea, the National NanoFab Center, and SK Hynix.

-Publication:
Woon Hyung Cheong, Jae Bum Jeon†, Jae Hyun In, Geunyoung Kim, Hanchan Song, Janho An, Juseong Park, Young Seok Kim, Cheol Seong Hwang, and Kyung Min Kim (2022) "Demonstration of Neuromodulation-inspired Stashing System for Energy-efficient Learning of Spiking Neural Network using a Self-Rectifying Memristor Array," Advanced Functional Materials, March 31, 2022 (DOI: 10.1002/adfm.202200337)

-Profile:
Professor Kyung Min Kim
http://semi.kaist.ac.kr
https://scholar.google.com/citations?user=BGw8yDYAAAAJ&hl=ko
Department of Materials Science and Engineering
KAIST
Machine Learning-Based Algorithm to Speed up DNA Sequencing
The algorithm presents the first full-fledged short-read alignment software that leverages learned indices to solve the exact match search problem for efficient seeding

< Image: Scientists from KAIST develop a new machine learning-based approach to speed up DNA sequencing. >

The human genome consists of a complete set of DNA, which is about 6.4 billion letters long. Because of its size, reading the whole genome sequence at once is challenging. So scientists use DNA sequencers to produce hundreds of millions of DNA sequence fragments, or short reads, up to 300 letters long. Then software assembles all the short reads like a giant jigsaw puzzle to reconstruct the entire genome sequence. Even with very fast computers, this job can take hours to complete.

A research team at KAIST has achieved up to 3.45x faster speeds by developing the first short-read alignment software that uses a recent advance in machine learning called a learned index. The research team reported their findings on March 7, 2022 in the journal Bioinformatics. The software has been released as open source and can be found on GitHub (https://github.com/kaist-ina/BWA-MEME).

Next-generation sequencing (NGS) is a state-of-the-art DNA sequencing method. Projects are underway with the goal of producing genome sequencing at population scale. Modern NGS hardware is capable of generating billions of short reads in a single run. Then the short reads have to be aligned with the reference DNA sequence. With large-scale DNA sequencing operations running hundreds of next-generation sequencers, the need for an efficient short-read alignment tool has become even more critical. Accelerating DNA sequence alignment would be a step toward achieving the goal of population-scale sequencing. However, existing algorithms are limited in their performance because of their frequent memory accesses.

BWA-MEM2 is a popular short-read alignment software package currently used for DNA sequence alignment, but it has its limitations. The state-of-the-art alignment process has two phases: seeding and extending. During the seeding phase, searches find exact matches of short reads in the reference DNA sequence. During the extending phase, the short reads from the seeding phase are extended. In the current process, bottlenecks occur in the seeding phase, because finding the exact matches slows the process.

The researchers set out to solve the problem of accelerating DNA sequence alignment. To speed up the process, they applied machine learning techniques to create an algorithmic improvement. Their algorithm, BWA-MEME (BWA-MEM emulated), leverages learned indices to solve the exact match search problem. The original software compared one character at a time for an exact match search. The team's new algorithm achieves up to 3.45x faster speeds in seeding throughput over BWA-MEM2 by reducing the number of instructions by 4.60x and memory accesses by 8.77x.

"Through this study, it has been shown that full genome big data analysis can be performed faster and less costly than conventional methods by applying machine learning technology," said Professor Dongsu Han from the School of Electrical Engineering at KAIST.

The researchers' ultimate goal was to develop efficient software that scientists from academia and industry could use on a daily basis for analyzing big data in genomics. "With the recent advances in artificial intelligence and machine learning, we see so many opportunities for designing better software for genomic data analysis. The potential is there for accelerating existing analysis as well as enabling new types of analysis, and our goal is to develop such software," added Han.

Whole genome sequencing has traditionally been used for discovering genomic mutations and identifying the root causes of diseases, which leads to the discovery and development of new drugs and cures. Beyond research, whole genome sequencing is also used for clinical purposes, and many other applications are possible. "The science and technology for analyzing genomic data is making rapid progress to make it more accessible for scientists and patients. This will enhance our understanding of diseases and help develop better cures for patients with various diseases."

The research was funded by the National Research Foundation of the Korean government's Ministry of Science and ICT.

-Publication
Youngmok Jung and Dongsu Han, "BWA-MEME: BWA-MEM emulated with a machine learning approach," Bioinformatics, Volume 38, Issue 9, May 2022 (https://doi.org/10.1093/bioinformatics/btac137)

-Profile
Professor Dongsu Han
School of Electrical Engineering
KAIST
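The seeding idea can be illustrated with a toy learned index. The sketch below is conceptual only and is not the BWA-MEME implementation (the released code linked above is the authoritative version): k-mers from the reference are encoded as integers and kept sorted, a simple linear model predicts where a query key sits in that sorted list, and a bounded binary search around the prediction replaces a full character-by-character search. The 2-bit encoding, the single linear model, and the toy reference string are all illustrative assumptions.

```python
# Minimal sketch of a learned index for exact-match seeding (illustrative only;
# not the actual BWA-MEME implementation). Keys are k-mers encoded as integers,
# stored in sorted order; a simple linear model predicts each key's position,
# and a bounded binary search around the prediction finds the exact match.
import bisect

def encode_kmer(kmer):
    """Encode a DNA k-mer as an integer (2 bits per base)."""
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    value = 0
    for base in kmer:
        value = (value << 2) | code[base]
    return value

class LearnedIndex:
    def __init__(self, sorted_keys):
        self.keys = sorted_keys
        n = len(sorted_keys)
        # Fit position ~ slope * key + intercept by least squares (tiny "model").
        mean_k = sum(sorted_keys) / n
        mean_p = (n - 1) / 2
        cov = sum((k - mean_k) * (i - mean_p) for i, k in enumerate(sorted_keys))
        var = sum((k - mean_k) ** 2 for k in sorted_keys) or 1
        self.slope = cov / var
        self.intercept = mean_p - self.slope * mean_k
        # Record the worst-case prediction error to bound the local search.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(sorted_keys))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def lookup(self, key):
        """Return the key's position in the sorted list, or -1; searches only a small window."""
        guess = self._predict(key)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys), guess + self.err + 1)
        pos = bisect.bisect_left(self.keys, key, lo, hi)
        return pos if pos < len(self.keys) and self.keys[pos] == key else -1

# Toy usage: index all 5-mers of a tiny reference and look two of them up.
reference = "ACGTACGTGGCTAACGT"
k = 5
kmers = sorted({encode_kmer(reference[i:i + k]) for i in range(len(reference) - k + 1)})
index = LearnedIndex(kmers)
print(index.lookup(encode_kmer("ACGTA")))  # position in the sorted k-mer list
print(index.lookup(encode_kmer("TTTTT")))  # -1: no exact match
```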
Decoding Brain Signals to Control a Robotic Arm
Advanced brain-machine interface system successfully interprets arm movement directions from neural signals in the brain

< Figure: Experimental paradigm. Subjects were instructed to perform reach-and-grasp movements to designate the locations of the target in three-dimensional space. (a) Subjects A and B were provided the visual cue as a real tennis ball at one of four pseudo-randomized locations. (b) Subjects A and B were provided the visual cue as a virtual reality clip showing a sequence of five stages of a reach-and-grasp movement. >

Researchers have developed a mind-reading system for decoding neural signals from the brain during arm movement. The method, described in the journal Applied Soft Computing, can be used by a person to control a robotic arm through a brain-machine interface (BMI).

A BMI is a device that translates nerve signals into commands to control a machine, such as a computer or a robotic limb. There are two main techniques for monitoring neural signals in BMIs: electroencephalography (EEG) and electrocorticography (ECoG). The EEG records signals from electrodes on the surface of the scalp and is widely employed because it is non-invasive, relatively cheap, safe, and easy to use. However, the EEG has low spatial resolution and detects irrelevant neural signals, which makes it difficult to interpret the intentions of individuals from the EEG. On the other hand, the ECoG is an invasive method that involves placing electrodes directly on the surface of the cerebral cortex below the scalp. Compared with the EEG, the ECoG can monitor neural signals with much higher spatial resolution and less background noise. However, this technique has several drawbacks.

"The ECoG is primarily used to find potential sources of epileptic seizures, meaning the electrodes are placed in different locations for different patients and may not be in the optimal regions of the brain for detecting sensory and movement signals," explained Professor Jaeseung Jeong, a brain scientist at KAIST. "This inconsistency makes it difficult to decode brain signals to predict movements."

To overcome these problems, Professor Jeong's team developed a new method for decoding ECoG neural signals during arm movement. The system is based on a machine-learning system for analysing and predicting neural signals called an 'echo-state network' and a mathematical probability model called the Gaussian distribution.

In the study, the researchers recorded ECoG signals from four individuals with epilepsy while they were performing a reach-and-grasp task. Because the ECoG electrodes were placed according to the potential sources of each patient's epileptic seizures, only 22% to 44% of the electrodes were located in the regions of the brain responsible for controlling movement.

During the movement task, the participants were given visual cues, either by placing a real tennis ball in front of them, or via a virtual reality headset showing a clip of a human arm reaching forward in first-person view. They were asked to reach forward, grasp an object, then return their hand and release the object, while wearing motion sensors on their wrists and fingers. In a second task, they were instructed to imagine reaching forward without moving their arms. The researchers monitored the signals from the ECoG electrodes during real and imaginary arm movements, and tested whether the new system could predict the direction of this movement from the neural signals.
They found that the novel decoder successfully classified arm movements in 24 directions in three-dimensional space, both in the real and virtual tasks, and that the results were at least five times more accurate than chance. They also used a computer simulation to show that the novel ECoG decoder could control the movements of a robotic arm.

Overall, the results suggest that the new machine learning-based BMI system successfully used ECoG signals to interpret the direction of the intended movements. The next steps will be to improve the accuracy and efficiency of the decoder. In the future, it could be used in a real-time BMI device to help people with movement or sensory impairments.

This research was supported by the KAIST Global Singularity Research Program of 2021, the Brain Research Program of the National Research Foundation of Korea funded by the Ministry of Science, ICT, and Future Planning, and the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education.

-Publication
Hoon-Hee Kim and Jaeseung Jeong, "An electrocorticographic decoder for arm movement for brain-machine interface using an echo state network and Gaussian readout," Applied Soft Computing, online December 31, 2021 (doi.org/10.1016/j.asoc.2021.108393)

-Profile
Professor Jaeseung Jeong
Department of Bio and Brain Engineering
College of Engineering
KAIST
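For readers unfamiliar with echo-state networks, the sketch below shows the general idea on synthetic data: a fixed, random recurrent reservoir turns each multichannel signal into a feature vector, and only a simple readout is trained. It is not the published decoder; in particular, it uses a plain ridge-regression readout and random toy "trials" instead of the paper's Gaussian readout and real ECoG recordings, and all sizes and parameters are illustrative.

```python
# Minimal echo state network (reservoir) sketch for classifying multichannel
# neural signals into movement directions. Illustrative only: the published
# decoder's architecture, Gaussian readout, and preprocessing are not reproduced.
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_inputs, n_units=200, spectral_radius=0.9):
    """Random, fixed input and recurrent weights scaled for the echo-state property."""
    w_in = rng.uniform(-0.5, 0.5, (n_units, n_inputs))
    w = rng.uniform(-0.5, 0.5, (n_units, n_units))
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))
    return w_in, w

def reservoir_state(x, w_in, w, leak=0.3):
    """Run a (time x channels) signal through the reservoir; return the final state."""
    state = np.zeros(w.shape[0])
    for sample in x:
        state = (1 - leak) * state + leak * np.tanh(w_in @ sample + w @ state)
    return state

# Toy data: 40 trials, 100 time steps, 8 ECoG-like channels, 4 movement directions.
n_trials, n_steps, n_channels, n_classes = 40, 100, 8, 4
labels = rng.integers(0, n_classes, n_trials)
trials = rng.standard_normal((n_trials, n_steps, n_channels)) + labels[:, None, None] * 0.1

w_in, w = make_reservoir(n_channels)
features = np.stack([reservoir_state(t, w_in, w) for t in trials])

# Linear readout trained by ridge regression on one-hot targets (a common ESN readout).
targets = np.eye(n_classes)[labels]
ridge = 1e-2
readout = np.linalg.solve(features.T @ features + ridge * np.eye(features.shape[1]),
                          features.T @ targets)
predictions = (features @ readout).argmax(axis=1)
print("training accuracy:", (predictions == labels).mean())
```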
Improving Speech Intelligibility with Privacy-Preserving AR
A privacy-preserving AR system can augment the speaker's speech with real-life subtitles to overcome the loss of contextual cues caused by mask-wearing and social distancing during the COVID-19 pandemic.

Degraded speech intelligibility induces face-to-face conversation participants to speak louder and more distinctively, exposing the content to potential eavesdroppers. Face masks, widely worn during and after the COVID-19 crisis, further deteriorate speech intelligibility. Augmented Reality (AR) can serve as an effective tool to visualise people's conversations and promote speech intelligibility, an approach known as speech augmentation. However, visualised conversations without proper privacy management can expose AR users to privacy risks.

An international research team of Professor Lik-Hang Lee in the Department of Industrial and Systems Engineering at KAIST and Professor Pan Hui in Computational Media and Arts at the Hong Kong University of Science and Technology employed a conversation-oriented Contextual Integrity (CI) principle to develop a privacy-preserving AR framework for speech augmentation. At its core, the framework, named Theophany, establishes ad-hoc social networks between relevant conversation participants to exchange contextual information and improve speech intelligibility in real time.

< Figure 1: A real-life subtitle application with AR headsets >

Theophany has been implemented as a real-life subtitle application in AR to improve speech intelligibility in daily conversations (Figure 1). This implementation leverages multi-modal channels such as eye-tracking, camera, and audio. Theophany transforms the user's speech into text and estimates the intended recipients through gaze detection. The CI Enforcer module evaluates each sentence's sensitivity; if the sensitivity meets the speaker's privacy threshold, the sentence is transmitted to the appropriate recipients (Figure 2).

< Figure 2: Multi-modal Contextual Integrity Channel >

Based on the principles of Contextual Integrity (CI), parameters of privacy perception are designed for privacy-preserving face-to-face conversations, such as topic, location, and participants. Accordingly, Theophany's operation depends on the topic and session. Figure 3 demonstrates several illustrative conversation sessions: (a) the topic is not sensitive and is transmitted to everybody in the user's gaze; (b) the topic is work-sensitive and is only transmitted to the coworker; (c) the topic is sensitive and is only transmitted to the friend in the user's gaze; (d) a new friend entering the user's gaze only gets the textual transcription once a new session (topic) starts; (e) the topic is highly sensitive, and nobody gets the textual transcription.

< Figure 3: Speech Augmentation in five illustrative sessions >

Within a prototypical AR system, Theophany augments the speaker's speech with real-life subtitles to overcome the loss of contextual cues caused by mask-wearing and social distancing during the COVID-19 pandemic.

The research was published in ACM Multimedia under the title 'Theophany: Multi-modal Speech Augmentation in Instantaneous Privacy Channels' (DOI: 10.1145/3474085.3475507) and was selected as one of the best paper award candidates (Top 5). The first author is an alumnus of the Department of Industrial and Systems Engineering at KAIST.

Short Bio:
Lik-Hang Lee received a PhD degree from SyMLab, Hong Kong University of Science and Technology, and Bachelor's and M.Phil. degrees from the University of Hong Kong.
He is currently a tenure-track assistant professor at the Korea Advanced Institute of Science and Technology (KAIST), South Korea, and the head of the Augmented Reality and Media Laboratory at KAIST. He has built and designed various human-centric computing systems specializing in augmented and virtual reality (AR/VR). In recent years, he has published more than 30 research papers on AR/VR at prestigious venues such as ACM WWW, ACM IMWUT, ACM Multimedia, ACM CSUR, and IEEE PerCom. He also serves the research community as a TPC member, PC member, and workshop organizer at prestigious venues such as AAAI, IJCAI, IEEE PERCOM, ACM CHI, ACM Multimedia, ACM IMWUT, and IEEE VR.
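As a rough illustration of the routing rule described above for the CI Enforcer, the sketch below forwards a transcribed sentence only when its estimated sensitivity is below the speaker's privacy threshold, and only to gazed-at people whose relationship matches the topic. The topic-to-audience table, the threshold value, and the data structures are hypothetical stand-ins, not Theophany's actual implementation.

```python
# Illustrative sketch of the subtitle routing rule (not the Theophany code):
# a transcribed sentence is forwarded only to recipients allowed by the speaker's
# per-topic privacy policy, with recipients inferred from gaze.
from dataclasses import dataclass

@dataclass
class Utterance:
    text: str
    topic: str            # e.g. "smalltalk", "work", "personal"
    sensitivity: float    # 0.0 (public) .. 1.0 (highly sensitive), from a classifier

# Hypothetical policy: which relationship groups may receive subtitles per topic.
TOPIC_AUDIENCE = {
    "smalltalk": {"stranger", "coworker", "friend"},
    "work": {"coworker"},
    "personal": {"friend"},
}

def route_subtitles(utterance, gazed_people, privacy_threshold=0.8):
    """Return the people who should receive the subtitle for this utterance."""
    if utterance.sensitivity >= privacy_threshold:
        return []  # highly sensitive: nobody gets the transcription
    allowed = TOPIC_AUDIENCE.get(utterance.topic, set())
    return [name for name, relation in gazed_people if relation in allowed]

# Toy usage: the speaker looks at three people while talking about work.
gazed = [("Alice", "coworker"), ("Bob", "friend"), ("Carol", "stranger")]
u = Utterance("the quarterly report is late", topic="work", sensitivity=0.4)
print(route_subtitles(u, gazed))   # ['Alice']
```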
Eco-Friendly Micro-Supercapacitors Using Fallen Leaves
Femtosecond micro-supercapacitors on a single leaf could easily be applied to wearable electronics, smart houses, and the IoT

< Image: A schematic illustration of the production of femtosecond laser-induced graphene. >

A KAIST research team has developed a graphene-inorganic-hybrid micro-supercapacitor made of leaves using femtosecond direct laser writing lithography. The advancement of wearable electronic devices is synonymous with innovations in flexible energy storage devices. Among the various energy storage devices, micro-supercapacitors have drawn a great deal of interest for their high electrical power density, long lifetimes, and short charging times.

However, waste battery generation is increasing with the growing consumption and use of electronic equipment and the short replacement cycles that follow advancements in mobile devices. The safety and environmental issues involved in the collection, recycling, and processing of such waste batteries are creating a number of challenges.

Forests cover about 30 percent of the Earth's surface, producing a huge amount of fallen leaves. This naturally occurring biomass comes in large quantities and is both biodegradable and reusable, which makes it an attractive, eco-friendly material. However, if the leaves are left neglected instead of being used efficiently, they can contribute to fires or water pollution.

To solve both problems at once, a research team led by Professor Young-Jin Kim from the Department of Mechanical Engineering and Dr. Hana Yoon from the Korea Institute of Energy Research developed a one-step technology that can create porous 3D graphene micro-electrodes with high electrical conductivity by irradiating femtosecond laser pulses onto the surface of the leaves in atmospheric conditions, without any additional materials or treatment. Taking this strategy further, the team also suggested a method for producing flexible micro-supercapacitors. They showed that this technique could quickly and easily produce porous graphene-inorganic-hybrid electrodes at a low cost, and validated their performance by using the graphene micro-supercapacitors to power an LED and an electronic watch that could function as a thermometer, hygrometer, and timer. These results open up the possibility of the mass production of flexible and green graphene-based electronic devices.

Professor Young-Jin Kim said, "Leaves create forest biomass that comes in unmanageable quantities, so using them for next-generation energy storage devices makes it possible for us to reuse waste resources, thereby establishing a virtuous cycle."

This research was published in Advanced Functional Materials last month and was sponsored by the Ministry of Agriculture, Food and Rural Affairs, the Korea Forest Service, and the Korea Institute of Energy Research.

-Publication
Truong-Son Dinh Le, Yeong A. Lee, Han Ku Nam, Kyu Yeon Jang, Dongwook Yang, Byunggi Kim, Kanghoon Yim, Seung Woo Kim, Hana Yoon, and Young-Jin Kim, "Green Flexible Graphene-Inorganic-Hybrid Micro-Supercapacitors Made of Fallen Leaves Enabled by Ultrafast Laser Pulses," Advanced Functional Materials, December 05, 2021 (doi.org/10.1002/adfm.202107768)

-Profile
Professor Young-Jin Kim
Ultra-Precision Metrology and Manufacturing (UPM2) Laboratory
Department of Mechanical Engineering
KAIST
AI Light-Field Camera Reads 3D Facial Expressions
Machine-learned light-field camera reads facial expressions from high-contrast, illumination-invariant 3D facial images

< Image: Facial expression reading based on MLP classification from 3D depth maps and 2D images obtained by the NIR-LFC >

A joint research team led by Professors Ki-Hun Jeong and Doheon Lee from the KAIST Department of Bio and Brain Engineering reported the development of a technique for facial expression detection by merging near-infrared light-field camera techniques with artificial intelligence (AI) technology.

Unlike a conventional camera, the light-field camera contains micro-lens arrays in front of the image sensor, which makes the camera small enough to fit into a smartphone while allowing it to acquire the spatial and directional information of the light with a single shot. The technique has received attention as it can reconstruct images in a variety of ways, including multi-views, refocusing, and 3D image acquisition, giving rise to many potential applications. However, the optical crosstalk between the micro-lenses and the shadows caused by external light sources in the environment has prevented existing light-field cameras from providing accurate image contrast and 3D reconstruction.

The joint research team applied a vertical-cavity surface-emitting laser (VCSEL) in the near-IR range to stabilize the accuracy of 3D image reconstruction, which previously depended on environmental light. When an external light source was shone on a face at 0-, 30-, and 60-degree angles, the light-field camera reduced image reconstruction errors by 54%. Additionally, by inserting a light-absorbing layer for visible and near-IR wavelengths between the micro-lens arrays, the team could minimize optical crosstalk while increasing the image contrast by 2.1 times.

Through this technique, the team overcame the limitations of existing light-field cameras and developed an NIR-based light-field camera (NIR-LFC) optimized for the 3D image reconstruction of facial expressions. Using the NIR-LFC, the team acquired high-quality 3D reconstruction images of facial expressions expressing various emotions regardless of the lighting conditions of the surrounding environment. The facial expressions in the acquired 3D images were distinguished through machine learning with an average accuracy of 85% – a statistically significant improvement over classification from 2D images. Furthermore, by calculating the interdependency of the distance information that varies with facial expression in the 3D images, the team could identify the information a light-field camera utilizes to distinguish human expressions.

Professor Ki-Hun Jeong said, "The sub-miniature light-field camera developed by the research team has the potential to become the new platform to quantitatively analyze the facial expressions and emotions of humans." To highlight the significance of this research, he added, "It could be applied in various fields including mobile healthcare, field diagnosis, social cognition, and human-machine interactions."

This research was published in Advanced Intelligent Systems online on December 16, under the title "Machine-Learned Light-field Camera that Reads Facial Expression from High-Contrast and Illumination Invariant 3D Facial Images." This research was funded by the Ministry of Science and ICT and the Ministry of Trade, Industry and Energy.
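The classification step mentioned above can be pictured with a minimal example: an MLP trained on flattened depth maps and evaluated on held-out samples. The synthetic depth maps, network size, and resulting accuracy below are illustrative placeholders; they do not reproduce the paper's NIR-LFC data or its reported 85% result.

```python
# Toy sketch of the classification step only: an MLP that labels facial expressions
# from flattened depth maps. Features, network size, and data are illustrative
# stand-ins, not the configuration reported in the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, height, width, n_expressions = 300, 32, 32, 5

# Synthetic 32x32 depth maps; each expression shifts the depth statistics slightly.
labels = rng.integers(0, n_expressions, n_samples)
depth_maps = rng.standard_normal((n_samples, height, width)) + labels[:, None, None] * 0.2
features = depth_maps.reshape(n_samples, -1)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```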
-Publication
"Machine-learned light-field camera that reads facial expression from high-contrast and illumination invariant 3D facial images," Sang-In Bae, Sangyeon Lee, Jae-Myeong Kwon, Hyun-Kyung Kim, Kyung-Won Jang, Doheon Lee, and Ki-Hun Jeong, Advanced Intelligent Systems, December 16, 2021 (doi.org/10.1002/aisy.202100182)

-Profile
Professor Ki-Hun Jeong
Biophotonic Laboratory
Department of Bio and Brain Engineering
KAIST

Professor Doheon Lee
Department of Bio and Brain Engineering
KAIST
Face Detection in Untrained Deep Neural Networks
A KAIST team shows that primitive visual selectivity for faces can arise spontaneously in completely untrained deep neural networks

Researchers have found that higher visual cognitive functions can arise spontaneously in untrained neural networks. A KAIST research team led by Professor Se-Bum Paik from the Department of Bio and Brain Engineering has shown that visual selectivity for facial images can arise even in completely untrained deep neural networks. This new finding provides revelatory insights into the mechanisms underlying the development of cognitive functions in both biological and artificial neural networks, and it also has a significant impact on our understanding of the origin of early brain functions before sensory experience.

The study, published in Nature Communications on December 16, demonstrates that neuronal activities selective to facial images are observed in randomly initialized deep neural networks in the complete absence of learning, and that these activities show the characteristics of those observed in biological brains.

The ability to identify and recognize faces is a crucial function for social behavior, and this ability is thought to originate from neuronal tuning at the single- or multi-neuronal level. Neurons that selectively respond to faces are observed in young animals of various species, and this has raised intense debate about whether face-selective neurons can arise innately in the brain or whether they require visual experience.

Using a model neural network that captures properties of the ventral stream of the visual cortex, the research team found that face-selectivity can emerge spontaneously from random feedforward wirings in untrained deep neural networks. The team showed that the character of this innate face-selectivity is comparable to that observed in face-selective neurons in the brain, and that this spontaneous neuronal tuning for faces enables the network to perform face detection tasks. These results imply a possible scenario in which the random feedforward connections that develop in early, untrained networks may be sufficient for initializing primitive visual cognitive functions.

Professor Paik said, "Our findings suggest that innate cognitive functions can emerge spontaneously from the statistical complexity embedded in the hierarchical feedforward projection circuitry, even in the complete absence of learning." He continued, "Our results provide a broad conceptual advance as well as advanced insight into the mechanisms underlying the development of innate functions in both biological and artificial neural networks, which may unravel the mystery of the generation and evolution of intelligence."

This work was supported by the National Research Foundation of Korea (NRF) and by the KAIST Singularity Research Project.

-Publication
Seungdae Baek, Min Song, Jaeson Jang, Gwangsu Kim, and Se-Bum Paik, "Face detection in untrained deep neural networks," Nature Communications 12, 7328, December 16, 2021 (https://doi.org/10.1038/s41467-021-27606-9)

-Profile
Professor Se-Bum Paik
Visual System and Neural Network Laboratory
Program of Brain and Cognitive Engineering
Department of Bio and Brain Engineering
College of Engineering
KAIST
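The core measurement idea above, probing face-selective responses in a network whose weights were never trained, can be sketched numerically as follows. The toy "face" and "non-face" stimuli, the small random ReLU network, and the selectivity index are simplified assumptions for illustration; they are not the ventral-stream model or the analysis pipeline used in the study.

```python
# Minimal numerical sketch: pass "face" and "non-face" images through a randomly
# initialized (untrained) network and compute a face-selectivity index per unit.
# Stimuli, network size, and the index formula are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def random_layer(n_in, n_out):
    return rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)

def forward(images, weights):
    x = images.reshape(len(images), -1)
    for w in weights:
        x = np.maximum(0, x @ w.T)   # ReLU feedforward layer with random weights
    return x

# Toy stimuli: "faces" share a fixed spatial template plus noise; non-faces are noise.
template = rng.standard_normal((16, 16))
faces = template + 0.5 * rng.standard_normal((100, 16, 16))
nonfaces = rng.standard_normal((100, 16, 16))

weights = [random_layer(256, 128), random_layer(128, 64)]   # never trained
r_face = forward(faces, weights).mean(axis=0)
r_nonface = forward(nonfaces, weights).mean(axis=0)
selectivity = (r_face - r_nonface) / (r_face + r_nonface + 1e-9)
print("units with face-selectivity index > 0.3:", int((selectivity > 0.3).sum()))
```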
Connecting the Dots to Find New Treatments for Breast Cancer
Systems biologists uncover new ways of reprogramming cancer cells to treat drug-resistant cancers

< Professor Kwang-Hyun Cho and colleagues have developed a mathematical model and identified optimal targets for reprogramming basal-like cancer cells into hormone therapy-responsive luminal-A cells by deciphering the complex molecular interactions within these cells through a systems biological approach. >

Scientists at KAIST believe they may have found a way to reverse an aggressive, treatment-resistant type of breast cancer into a less dangerous kind that responds well to treatment. The study involved the use of mathematical models to untangle the complex genetic and molecular interactions that occur in the two types of breast cancer, but the approach could be extended to find treatments for many others. The study's findings were published in the journal Cancer Research.

Basal-like tumours are the most aggressive type of breast cancer, with the worst prognosis. Chemotherapy is the only available treatment option, but patients experience high recurrence rates. On the other hand, luminal-A breast cancer responds well to drugs that specifically target a receptor on the cancer cells' surfaces called estrogen receptor alpha (ERα).

KAIST systems biologist Kwang-Hyun Cho and colleagues analyzed the complex molecular and genetic interactions of basal-like and luminal-A breast cancers to find out whether there might be a way to switch the former to the latter and give patients a better chance of responding to treatment. To do this, they accessed large amounts of cancer and patient data to understand which genes and molecules are involved in the two types. They then input this data into a mathematical model that represents genes, proteins, and molecules as dots and the interactions between them as lines. The model can be used to conduct simulations and see how the interactions change when certain genes are turned on or off.

"There have been a tremendous number of studies trying to find therapeutic targets for treating basal-like breast cancer patients," says Cho. "But clinical trials have failed due to the complex and dynamic nature of cancer. To overcome this issue, we looked at breast cancer cells as a complex network system and implemented a systems biological approach to unravel the underlying mechanisms that would allow us to reprogram basal-like into luminal-A breast cancer cells."

Using this approach, followed by experimental validation on real breast cancer cells, the team found that turning off two key gene regulators, BCL11A and HDAC1/2, switched a basal-like cancer signalling pathway into a different one used by luminal-A cancer cells. The switch reprograms the cancer cells and makes them more responsive to drugs that target ERα receptors. However, further tests will be needed to confirm that this also works in animal models and eventually humans.

"Our study demonstrates that the systems biological approach can be useful for identifying novel therapeutic targets," says Cho. The researchers are now expanding their breast cancer network model to include all breast cancer subtypes. Their ultimate aim is to identify more drug targets and to understand the mechanisms that could drive drug-resistant cells to turn into drug-sensitive ones.

This work was supported by the National Research Foundation of Korea, the Ministry of Science and ICT, the Electronics and Telecommunications Research Institute, and the KAIST Grand Challenge 30 Project.

-Publication
Sea R. Choi, Chae Young Hwang, Jonghoon Lee, and Kwang-Hyun Cho, "Network Analysis Identifies Regulators of Basal-like Breast Cancer Reprogramming and Endocrine Therapy Vulnerability," Cancer Research, November 30 (doi: 10.1158/0008-5472.CAN-21-0621)

-Profile
Professor Kwang-Hyun Cho
Laboratory for Systems Biology and Bio-Inspired Engineering
Department of Bio and Brain Engineering
KAIST
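The kind of network modelling described above, with nodes for genes and proteins, edges for interactions, and simulations of knockouts, can be illustrated with a toy Boolean network. The three nodes and update rules below are entirely hypothetical; they only show how removing one regulator can push such a model into a different stable state, not the actual breast cancer network from the study.

```python
# Toy Boolean-network sketch of the modelling idea: nodes are genes/proteins,
# edges are interactions, and a simulation shows how knocking a node out can
# switch the network into a different stable state. The three-node network and
# rules below are hypothetical, not the breast cancer network from the study.
def step(state, knockouts=frozenset()):
    a, b, c = state                       # hypothetical regulators A, B, C
    nxt = {
        "A": a and not c,                 # A maintains itself unless C represses it
        "B": a or b,                      # B is activated by A and self-sustains
        "C": not a,                       # C turns on when A is off
    }
    return tuple(False if g in knockouts else nxt[g] for g in ("A", "B", "C"))

def attractor(state, knockouts=frozenset()):
    """Iterate the update rule until the state repeats (a fixed point or cycle)."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state, knockouts)
    return state

start = (True, True, False)
print("baseline attractor:   ", attractor(start))
print("with 'A' knocked out: ", attractor(start, knockouts={"A"}))
```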
A Study Shows Reactive Electrolyte Additives Improve Lithium Metal Battery Performance
Stable electrode-electrolyte interfaces constructed by fluorine- and nitrogen-donating ionic additives provide an opportunity to improve high-performance lithium metal batteries

< A combination of lithium difluoro(bisoxalato)phosphate as an F donor and lithium nitrate as an N donor with different electron-accepting abilities and adsorption tendencies improves the cycle performance of Li|NCM811 full cells through the creation of a dual-layer SEI on a Li metal anode and a protective CEI on a Ni-rich cathode. >

A research team has shown that electrolyte additives increase the lifetime of lithium metal batteries and remarkably improve fast charging and discharging performance. Professor Nam-Soon Choi's team from the Department of Chemical and Biomolecular Engineering at KAIST structured the solid electrolyte interphase into a dual-layer architecture and demonstrated groundbreaking run times for lithium metal batteries. The team applied two electrolyte additives with different reduction and adsorption properties to realize the functionality of the dual-layer solid electrolyte interphase. In addition, the team confirmed that the structural stability of the nickel-rich cathode was achieved through the formation of a thin protective layer on the cathode. This study was reported in Energy Storage Materials.

Securing high-energy-density lithium metal batteries with a long lifespan and fast charging performance is vital for realizing their ubiquitous use as superior power sources for electric vehicles. Lithium metal batteries comprise a lithium metal anode that delivers 10 times higher capacity than the graphite anodes used in lithium-ion batteries. Therefore, lithium metal is an indispensable anode material for realizing high-energy rechargeable batteries. However, undesirable reactions between the electrolyte and the lithium metal anode can reduce the power, and this remains an impediment to achieving a longer battery lifespan.

Previous studies focused only on the formation of the solid electrolyte interphase on the surface of the lithium metal anode. The team designed a way to create a dual-layer solid electrolyte interphase that resolves the instability of the lithium metal anode by using electrolyte additives selected according to their electron-accepting ability and adsorption tendencies. This hierarchical structure of the solid electrolyte interphase on the lithium metal anode has the potential to be further applied to lithium-alloy anodes, lithium storage structures, and anode-free technology to meet market expectations for electrolyte technology.

Batteries with lithium metal anodes and nickel-rich cathodes retained 80.9% of their initial capacity after 600 cycles and achieved a high Coulombic efficiency of 99.94%. These remarkable results contribute to the development of protective dual-layer solid electrolyte interphase technology for lithium metal anodes.

Professor Choi said that the research suggests a new direction for the development of electrolyte additives to regulate the unstable lithium metal anode-electrolyte interface, the biggest hurdle in research on lithium metal batteries. She added that anode-free secondary battery technology is expected to be a game changer in the secondary battery market, and that electrolyte additive technology will contribute to the enhancement of anode-free secondary batteries through the stabilization of lithium metal anodes.
This research was funded by the Technology Development Program to Solve Climate Change of the National Research Foundation in Korea funded by the Ministry of Science, ICT & Future Planning and the Technology Innovation Program funded by the Ministry of Trade, Industry & Energy, and Hyundai Motor Company. - Publication Saehun Kim, Sung O Park, Min-Young Lee, Jeong-A Lee, Imanuel Kristanto, Tae Kyung Lee, Daeyeon Hwang, Juyoung Kim, Tae-Ung Wi, Hyun-Wook Lee, Sang Kyu Kwak, and Nam Soon Choi, “Stable electrode-electrolyte interfaces constructed by fluorine- and nitrogen-donating ionic additives for high-performance lithium metal batteries,” Energy Storage Materials, 45, 1-13 (2022), (doi: https://doi.org/10.1016/j.ensm.2021.10.031) - Profile Professor Nam-Soon Choi Energy Materials Laboratory Department of Chemical and Biomolecular Engineering KAIST
Deep Learning Framework to Enable Material Design Space Exploration
Researchers propose a deep neural network-based forward design space exploration using active transfer learning and data augmentation

< Figure 1: Schematic of the deep learning framework for material design space exploration. Schematic of the gradual expansion of the reliable prediction domain of the DNN based on the addition of data generated by the hyper-heuristic genetic algorithm and active transfer learning. >

A new study proposed a deep neural network-based forward design approach that enables an efficient search for superior materials far beyond the domain of the initial training set. This approach compensates for the weak predictive power of neural networks on an unseen domain through gradual updates of the neural network with active transfer learning and data augmentation methods.

Professor Seunghwa Ryu believes that this study will help address a variety of optimization problems that have an astronomical number of possible design configurations. For the grid composite optimization problem, the proposed framework was able to provide excellent designs close to the global optima, even with the addition of a very small dataset corresponding to less than 0.5% of the initial training set size. This study was reported in npj Computational Materials last month.

"We wanted to mitigate the limitation of neural networks, their weak predictive power beyond the training set domain, for material or structure design," said Professor Ryu from the Department of Mechanical Engineering.

Neural network-based generative models have been actively investigated as an inverse design method for finding novel materials in a vast design space. However, the applicability of conventional generative models is limited because they cannot access data outside the range of the training sets. Advanced generative models that were devised to overcome this limitation also suffer from weak predictive power for the unseen domain.

Professor Ryu's team, in collaboration with researchers from Professor Grace Gu's group at UC Berkeley, devised a design method that simultaneously expands the domain using the strong predictive power of a deep neural network and searches for the optimal design by repetitively performing three key steps. First, it searches for a few candidates with improved properties located close to the training set via a genetic algorithm, by mixing superior designs within the training set. Then, it checks whether the candidates really have improved properties, and expands the training set by duplicating the validated designs via a data augmentation method. Finally, it expands the reliable prediction domain by updating the neural network with the new superior designs via transfer learning. Because the expansion proceeds along relatively narrow but correct routes toward the optimal design (depicted in the schematic of Fig. 1), the framework enables an efficient search. A minimal sketch of this loop is given after the next paragraph.

As a data-hungry method, a deep neural network model tends to have reliable predictive power only within and near the domain of the training set. When the optimal configuration of materials and structures lies far beyond the initial training set, which frequently is the case, neural network-based design methods suffer from weak predictive power and become inefficient.
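Below is a minimal sketch of the three-step loop under toy assumptions. The small surrogate network, the genetic mixing step, the duplication-based augmentation, and the stand-in "simulation" are all illustrative; the study's actual networks, hyper-heuristic genetic algorithm, and physical property evaluations are far more elaborate.

```python
# Conceptual sketch of the three-step loop described above (not the authors' code):
# (1) generate candidates near the training set with a simple genetic step,
# (2) validate them with the "ground-truth" evaluator and duplicate the validated
#     designs (data augmentation), (3) update the surrogate network (transfer learning).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_dims = 16

def true_property(designs):
    """Placeholder for the expensive simulation that scores a design."""
    return designs.sum(axis=1)

# Initial training set: random binary designs (e.g., a grid composite layout).
X = rng.integers(0, 2, (200, n_dims)).astype(float)
y = true_property(X)
model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, warm_start=True, random_state=0)
model.fit(X, y)

for iteration in range(5):
    # Step 1: mix (crossover) and mutate the current best designs to get candidates.
    best = X[np.argsort(y)[-20:]]
    parents_a, parents_b = best[rng.integers(0, 20, 50)], best[rng.integers(0, 20, 50)]
    mask = rng.random((50, n_dims)) < 0.5
    candidates = np.where(mask, parents_a, parents_b)
    mutate = rng.random(candidates.shape) < 0.05
    candidates[mutate] = rng.integers(0, 2, mutate.sum())
    # Rank candidates with the surrogate and keep the few predicted to be best.
    picked = candidates[np.argsort(model.predict(candidates))[-5:]]
    # Step 2: validate with the ground truth and augment by duplicating them.
    validated_y = true_property(picked)
    X = np.vstack([X, np.repeat(picked, 10, axis=0)])
    y = np.concatenate([y, np.repeat(validated_y, 10)])
    # Step 3: transfer learning -- continue training the same network on the new set.
    model.fit(X, y)
    print(f"iteration {iteration}: best validated property = {y.max():.1f}")
```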
Researchers expect that the framework will be applicable to a wide range of optimization problems in other science and engineering disciplines with astronomically large design spaces, because it provides an efficient way of gradually expanding the reliable prediction domain toward the target design while avoiding the risk of getting stuck in local minima. In particular, because the approach is less data-hungry, design problems in which data generation is time-consuming and expensive will benefit most from this new framework.

The research team is currently applying the optimization framework to the design of metamaterial structures, segmented thermoelectric generators, and optimal sensor distributions. "From these sets of ongoing studies, we expect to better recognize the pros and cons, and the potential of the suggested algorithm. Ultimately, we want to devise more efficient machine learning-based design approaches," explained Professor Ryu.

This study was funded by the National Research Foundation of Korea and the KAIST Global Singularity Research Project.

-Publication
Yongtae Kim, Youngsoo, Charles Yang, Kundo Park, Grace X. Gu, and Seunghwa Ryu, "Deep learning framework for material design space exploration using active transfer learning and data augmentation," npj Computational Materials (https://doi.org/10.1038/s41524-021-00609-2)

-Profile
Professor Seunghwa Ryu
Mechanics & Materials Modeling Lab
Department of Mechanical Engineering
KAIST
A Mechanism Underlying the Most Common Cause of Epilepsy
An interdisciplinary study shows that neurons carrying somatic mutations in MTOR can lead to focal epileptogenesis via non-cell-autonomous hyperexcitability of nearby non-mutated neurons

< Image 1: Neurons carrying somatic mutations in MTOR lead to focal epileptogenesis via non-cell-autonomous hyperexcitability of nearby non-mutated neurons. (Left) Neurons with an mTOR mutation (green) observed in a mouse brain section image. (Middle) Network model consisting of a small portion of mutated neurons and a large portion of nearby non-mutated neurons. (Right) Mitigated hyperactivity of non-mutated neurons after treatment with an adenosine kinase inhibitor. >

During fetal development, cells must migrate to the outer edge of the brain to form critical connections for information transfer and regulation in the body. When even a few cells fail to move to the correct location, the neurons become disorganized, and this results in focal cortical dysplasia. This condition is the most common cause of seizures that cannot be controlled with medication in children and the second most common cause in adults.

Now, an interdisciplinary team studying neurogenetics, neural networks, and neurophysiology at KAIST has revealed how dysfunction in even a small percentage of cells can cause disorder across the entire brain. They published their results on June 28 in Annals of Neurology.

The work builds on a previous finding, also by a KAIST scientist, that focal cortical dysplasia is caused by mutations in the cells involved in mTOR, a pathway that regulates signaling between neurons in the brain.

"Only 1 to 2% of neurons carrying mutations in the mTOR signaling pathway that regulates cell signaling in the brain have been found to induce seizures in animal models of focal cortical dysplasia," said Professor Jong-Woo Sohn from the Department of Biological Sciences. "The main challenge of this study was to explain how nearby non-mutated neurons are hyperexcitable."

Initially, the researchers hypothesized that the mutated cells affected the number of excitatory and inhibitory synapses in all neurons, mutated or not. These neural gates can trigger or halt activity, respectively, in other neurons. Seizures are a result of extreme activity, called hyperexcitability. If the mutated cells upended the balance and resulted in more excitatory synapses, the researchers thought, it would make sense that the cells would be more susceptible to hyperexcitability and, as a result, seizures.

"Contrary to our expectations, the synaptic input balance was not changed in either the mutated or non-mutated neurons," said Professor Jeong Ho Lee from the Graduate School of Medical Science and Engineering. "We turned our attention to a protein overproduced by mutated neurons."

The protein is adenosine kinase, which lowers the concentration of adenosine. This naturally occurring compound is an anticonvulsant and works to relax vessels. In mice engineered to have focal cortical dysplasia, the researchers injected adenosine to replace the levels lowered by the protein. It worked, and the neurons became less excitable.

"We demonstrated that augmentation of adenosine signaling could attenuate the excitability of non-mutated neurons," said Professor Se-Bum Paik from the Department of Bio and Brain Engineering. The effect on the non-mutated neurons was the surprising part, according to Paik. "The seizure-triggering hyperexcitability originated not in the mutation-carrying neurons, but instead in the nearby non-mutated neurons," he said.
The mutated neurons produced more adenosine kinase, reducing the adenosine levels in the local environment of all the cells. With less adenosine, the non-mutated neurons became hyperexcitable, leading to seizures.

"While we need to further investigate the relationship between the concentration of adenosine and the increased excitation of nearby neurons, our results support the medical use of drugs that activate adenosine signaling as a possible treatment pathway for focal cortical dysplasia," Professor Lee said.

The Suh Kyungbae Foundation, the Korea Health Technology Research and Development Project, the Ministry of Health & Welfare, and the National Research Foundation of Korea funded this work.

-Publication:
Koh, H.Y., Jang, J., Ju, S.H., Kim, R., Cho, G.-B., Kim, D.S., Sohn, J.-W., Paik, S.-B., and Lee, J.H. (2021), 'Non–Cell Autonomous Epileptogenesis in Focal Cortical Dysplasia,' Annals of Neurology, 90: 285-299 (https://doi.org/10.1002/ana.26149)

-Profile
Professor Jeong Ho Lee
Translational Neurogenetics Lab
https://tnl.kaist.ac.kr/
Graduate School of Medical Science and Engineering
KAIST

Professor Se-Bum Paik
Visual System and Neural Network Laboratory
http://vs.kaist.ac.kr/
Department of Bio and Brain Engineering
KAIST

Professor Jong-Woo Sohn
Laboratory for Neurophysiology
https://sites.google.com/site/sohnlab2014/home
Department of Biological Sciences
KAIST

Dr. Hyun Yong Koh
Translational Neurogenetics Lab
Graduate School of Medical Science and Engineering
KAIST

Dr. Jaeson Jang
Visual System and Neural Network Laboratory
Department of Bio and Brain Engineering
KAIST

Sang Hyeon Ju, M.D.
Laboratory for Neurophysiology
Department of Biological Sciences
KAIST
Brain-Inspired Highly Scalable Neuromorphic Hardware
Neurons and synapses based on single transistors can dramatically reduce hardware cost and accelerate the commercialization of neuromorphic hardware

< Single-transistor neurons and synapses fabricated using a standard silicon CMOS process. They are co-integrated on the same 8-inch wafer. >

KAIST researchers have fabricated brain-inspired, highly scalable neuromorphic hardware by co-integrating single-transistor neurons and synapses. Using standard silicon complementary metal-oxide-semiconductor (CMOS) technology, the neuromorphic hardware is expected to reduce chip cost and simplify fabrication procedures.

The research team, led by Yang-Kyu Choi and Sung-Yool Choi, produced neurons and synapses based on single transistors for highly scalable neuromorphic hardware and demonstrated its ability to recognize text and face images. This research was featured in Science Advances on August 4.

Neuromorphic hardware has attracted a great deal of attention because it can perform artificial intelligence functions while consuming ultra-low power of less than 20 watts by mimicking the human brain. To make neuromorphic hardware work, a neuron that generates a spike when integrating a certain signal and a synapse that remembers the connection between two neurons are necessary, just as in the biological brain. However, since neurons and synapses constructed on digital or analog circuits occupy a large area, there is a limit in terms of hardware efficiency and cost. Since the human brain consists of about 10¹¹ neurons and 10¹⁴ synapses, the hardware cost must be reduced to apply neuromorphic hardware to mobile and IoT devices.

To solve the problem, the research team mimicked the behavior of biological neurons and synapses with a single transistor and co-integrated them onto an 8-inch wafer. The manufactured neuromorphic transistors have the same structure as the transistors for memory and logic that are currently mass-produced. In addition, the neuromorphic transistors were demonstrated for the first time to implement a 'Janus structure' that functions as both a neuron and a synapse, just as a coin has heads and tails.

Professor Yang-Kyu Choi said that this work can dramatically reduce the hardware cost by replacing the neurons and synapses that were based on complex digital and analog circuits with a single transistor. "We have demonstrated that neurons and synapses can be implemented using a single transistor," said Joon-Kyu Han, the first author. "By co-integrating single-transistor neurons and synapses on the same wafer using a standard CMOS process, the hardware cost of the neuromorphic hardware has been improved, which will accelerate the commercialization of neuromorphic hardware," Han added.

This research was supported by the National Research Foundation (NRF) and the IC Design Education Center (IDEC).

-Publication
Joon-Kyu Han, Sung-Yool Choi, Yang-Kyu Choi, et al., "Cointegration of single-transistor neurons and synapses by nanoscale CMOS fabrication for highly scalable neuromorphic hardware," Science Advances (DOI: 10.1126/sciadv.abg8836)

-Profile
Professor Yang-Kyu Choi
Nano-Oriented Bio-Electronics Lab
https://sites.google.com/view/nobelab/
School of Electrical Engineering
KAIST

Professor Sung-Yool Choi
Molecular and Nano Device Laboratory
https://www.mndl.kaist.ac.kr/
School of Electrical Engineering
KAIST
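The two behaviors the devices emulate, integrate-and-fire spiking and a stored, adjustable synaptic weight, can be described with a short behavioral model. The sketch below is a software analogy with made-up parameters, not a model of the fabricated transistors' physics.

```python
# Behavioral sketch of the two functions the single-transistor devices mimic:
# a neuron that integrates input and fires a spike at a threshold, and a synapse
# whose weight is strengthened and "remembered". Parameter values are illustrative,
# not measurements from the fabricated devices.
class IntegrateAndFireNeuron:
    def __init__(self, threshold=1.0, leak=0.95):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, current):
        """Integrate input with leakage; emit a spike and reset at threshold."""
        self.potential = self.potential * self.leak + current
        if self.potential >= self.threshold:
            self.potential = 0.0
            return 1
        return 0

class Synapse:
    def __init__(self, weight=0.2):
        self.weight = weight          # stored conductance state

    def transmit(self, spike):
        return spike * self.weight

    def potentiate(self, amount=0.05):
        self.weight = min(1.0, self.weight + amount)   # strengthen the connection

# Toy usage: a presynaptic spike train drives a neuron through one synapse.
pre_spikes = [1, 0, 1, 1, 0, 1, 1, 1]
syn, neuron = Synapse(), IntegrateAndFireNeuron()
out = []
for s in pre_spikes:
    fired = neuron.step(syn.transmit(s))
    if fired:
        syn.potentiate()              # simple Hebbian-style update
    out.append(fired)
print("output spikes:", out)
```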