
KAIST Develops AI Crowd Prediction Technology to P..
<(From left) Ph.D. candidate Youngeun Nam from KAIST, Professor Jae-Gil Lee from KAIST, Ji-Hye Na from KAIST; (top right, from left) Professor Soo-Sik Yoon from Korea University, Professor HwanJun Song from KAIST>

To prevent crowd crush incidents like the Itaewon tragedy, it is crucial to go beyond simply counting people and instead use technology that can detect the real-time inflow and movement patterns of crowds. A KAIST research team has successfully developed a new AI crowd prediction technology that can be used not only for managing large-scale events and mitigating urban traffic congestion but also for responding to infectious disease outbreaks.

On the 17th, KAIST (President Kwang Hyung Lee) announced that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new AI technology that can more accurately predict crowd density.

The dynamics of crowd gathering cannot be explained by a simple increase or decrease in the number of people. Even with the same number of people, the level of risk changes depending on where they are coming from and which direction they are heading. Professor Lee's team expressed this movement using the concept of a "time-varying graph." This means that accurate prediction is only possible by simultaneously analyzing two types of information: "node information" (how many people are in a specific area) and "edge information" (the flow of people between areas).

In contrast, most previous studies focused on only one of these factors, either concentrating on "how many people are gathered right now" or "which paths people are moving along." However, the research team emphasized that combining both is necessary to truly capture a dangerous situation. For example, a sudden increase in density in a specific alleyway, such as Alley A, is difficult to predict with just "current population" data.
But by also considering the flow of people continuously moving from a nearby area, Area B, toward Area A (edge information), it is possible to pre-emptively identify the signal that "Area A will soon become dangerous."

To achieve this, the team developed a "bi-modal learning" method. This technology simultaneously considers population counts (node information) and population flow (edge information), while also learning spatial relationships (which areas are connected) and temporal changes (when and how movement occurs). Specifically, the team introduced a 3D contrastive learning technique. This allows the AI to learn not only 2D spatial (geographical) information but also temporal information, creating a 3D relationship. As a result, the AI can understand not just whether the population is "large or small right now," but "what pattern the crowd is developing into over time." This allows for a much more accurate prediction of the time and place where congestion will occur than previous methods.

<Figure 1. Workflow of the bi-modal learning-based crowd congestion risk prediction developed by the research team. The research team developed a crowd congestion risk prediction model based on bi-modal learning. The vertex-based time series represents indicator changes in a specific area (e.g., increases or decreases in crowd density), while the edge-based time series captures the flow of population movement between areas over time. Although these two types of data are collected from different sources, they are mapped onto the same network structure and provided together as input to the AI model. During training, the model simultaneously leverages both vertex and edge information based on a shared network, allowing it to capture complex movement patterns that might be overlooked when relying on only a single type of data.
For example, a sudden increase in crowd density in a particular area may be difficult to predict using vertex information alone, but by additionally considering the steady inflow of people from adjacent areas (edge information), the prediction becomes more accurate. In this way, the model can precisely identify future changes based on past and present information, ultimately predicting high-risk crowd congestion areas in advance.>

The research team built and publicly released six real-world datasets for their study, compiled from sources such as Seoul, Busan, and Daegu subway data, New York City transit data, and COVID-19 confirmed case data from South Korea and New York. The proposed technology achieved up to a 76.1% improvement in prediction accuracy over recent state-of-the-art methods, demonstrating strong performance.

Professor Jae-Gil Lee stated, "It is important to develop technologies that can have a significant social impact," adding, "I hope this technology will greatly contribute to protecting public safety in daily life, such as in crowd management for large events, easing urban traffic congestion, and curbing the spread of infectious diseases."

Youngeun Nam, a Ph.D. candidate in the KAIST School of Computing, was the first author of the study, and Jihye Na, another Ph.D. candidate, was a co-author. The research findings were presented at the Knowledge Discovery and Data Mining (KDD) 2025 conference, a top international conference in the field of data mining, this past August.

※ Paper Title: Bi-Modal Learning for Networked Time Series
※ DOI: https://doi.org/10.1145/3711896.3736856

This technology is the result of research projects including the "Mid-Career Researcher Project" (RS-2023-NR077002, Core Technology Research for Crowd Management Systems Based on AI and Mobility Big Data) and the "Human-Centered AI Core Technology Development Project" (RS-2022-II220157, Robust, Fair, and Scalable Data-Centric Continuous Learning).
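The node-plus-edge idea described in the article can be illustrated with a minimal sketch. All area names, numbers, and the `inflow_warning` heuristic below are invented for illustration; this is not the team's model, only a toy showing why edge (flow) information can flag a risk that node (count) information alone misses.

```python
# Toy time-varying graph: node series = people counted per area at t = 0..3,
# edge series = people moving between areas per time step (numbers invented).
node_series = {
    "A": [40, 42, 45, 47],   # Alley A: its own count looks almost flat...
    "B": [90, 85, 78, 70],   # ...while Area B is steadily emptying out.
}
edge_series = {("B", "A"): [5, 12, 20, 28]}   # rising flow B -> A

def inflow_warning(area, node_series, edge_series, growth=1.5):
    """Flag `area` if total inflow grew by `growth`x over the window,
    even when the area's own count has not yet spiked."""
    steps = len(next(iter(node_series.values())))
    inflow = [sum(flows[t] for (src, dst), flows in edge_series.items() if dst == area)
              for t in range(steps)]
    return inflow[-1] > 0 and inflow[-1] >= growth * inflow[0]

print(inflow_warning("A", node_series, edge_series))  # True: B->A flow is surging
print(inflow_warning("B", node_series, edge_series))  # False: no inflow into B
```

A count-only model would see Alley A as stable (40 to 47 people); only the edge series reveals the accelerating inflow.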

The Fall of Tor for Just $2: A Solution to the Tor..
<(From left) Ph.D. candidate Jinseo Lee, Hobin Kim, Professor Min Suk Kang>

A KAIST research team has reached a new milestone in global security research, becoming the first Korean research team to identify a security vulnerability in Tor, the world's largest anonymity network, and propose a solution.

On September 12, our university's Professor Min Suk Kang's research team from the School of Computing announced that they had received an Honorable Mention Award at the USENIX Security 2025 conference, held from August 13 to 15 in Seattle, USA. The USENIX Security conference is one of the world's most prestigious conferences in information security, ranking first among all security and cryptography conferences and journals based on the Google Scholar h-5 index. The Honorable Mention Award is a highly regarded honor given to only about 6% of all papers.

The core of this research was the discovery of a new denial-of-service (DoS) attack vulnerability in Tor and the proposal of a method to resolve it. The Tor Onion Service, a key technology for various anonymity-based services, is a primary tool for privacy protection, used by millions of people worldwide every day. The research team found that Tor's congestion-sensing mechanism is insecure and proved through a real-world network experiment that a website could be crippled for as little as $2. This is just 0.2% of the cost of existing attacks.

The study is particularly notable as it was the first to show that the existing security measures implemented in Tor to prevent DoS attacks can actually make the attacks worse. In addition, the team used mathematical modeling to uncover the principles behind this vulnerability and provided guidelines for Tor to maintain a balance between anonymity and availability. These guidelines have been shared with the Tor development team and are currently being applied through a phased patch.
A new attack model proposed by the research team shows that when an attacker sends a tiny, pre-designed amount of attack traffic to a Tor website, it confuses the congestion measurement system. This triggers excessive congestion control, which ultimately prevents regular users from accessing the website. The research team proved through experiments that the cost of this attack is only 0.2% of that of existing methods.

In February, Tor founder Roger Dingledine visited KAIST and discussed collaboration with the research team. In June, the Tor administration paid a bug bounty of approximately $800 in appreciation for the team's proactive report.

"Tor anonymity system security is an area of active global research, but this is the first study on its security vulnerabilities in Korea, which makes it very significant," said Professor Min Suk Kang. "The vulnerability we identified is very high-risk, so it received significant attention from many Tor security researchers at the conference. We will continue our comprehensive research, not only on enhancing the Tor system's anonymity but also on using Tor technology in the field of criminal investigation."

The research was conducted by Ph.D. candidate Jinseo Lee (first author) and Hobin Kim, a former master's student at the KAIST Graduate School of Information Security and currently a Ph.D. candidate at Carnegie Mellon University (second author). The paper is titled "Onions Got Puzzled: On the Challenges of Mitigating Denial-of-Service Problems in Tor Onion Services." https://www.usenix.org/conference/usenixsecurity25/presentation/lee

This achievement was recognized as a groundbreaking, first-of-its-kind study on Tor security vulnerabilities in Korea and played a decisive role in the selection of Professor Kang's lab for the 2025 Basic Research Program (Global Basic Research Lab) by the Ministry of Science and ICT.

< Photo 2. Presentation photo of Ph.D. candidate Jinseo Lee from the School of Computing>

Through this program, the research team plans to establish a domestic research collaboration system with Ewha Womans University and Sungshin Women's University and expand international research collaborations with researchers in the U.S. and U.K. to conduct in-depth research on Tor vulnerabilities and anonymity over the next three years.

< Photo 3. Presentation photo of Ph.D. candidate Jinseo Lee from the School of Computing>
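The general failure mode the article describes, an over-aggressive congestion controller that a tiny attack stream can keep pinned at its minimum rate, can be sketched with a toy queue model. This is NOT Tor's actual congestion algorithm; every constant below is invented purely to illustrate the amplification idea.

```python
# Toy queue model (invented numbers; not Tor's real mechanism): a controller
# that halves its service window whenever measured queue depth crosses a
# threshold can be starved by a small, constant stream of attack cells.

def run(attack_cells_per_tick):
    window, queue, delivered = 16, 0, 0
    for _ in range(100):
        queue += 4 + attack_cells_per_tick   # 4 legitimate cells + attack cells
        if queue > 8:                        # congestion "sensed"
            window = max(1, window // 2)     # over-aggressive backoff
        else:
            window = min(16, window + 1)     # slow recovery
        served = min(queue, window)
        queue -= served
        delivered += min(served, 4)          # legitimate cells actually served
    return delivered

baseline, attacked = run(0), run(6)
print(baseline, attacked)  # attacked throughput collapses far below baseline
```

The point of the sketch: the defense itself (the halving) does the damage, so a cheap trickle of attack traffic suffices, which mirrors the paper's finding that existing anti-DoS measures can make the attack worse.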

World's First Quantum Computing for Lego-like Desi..
<(From left to right) Professor Jihan Kim, Ph.D. candidate Sinyoung Kang, Ph.D. candidate Younghoon Kim from the Department of Chemical and Biomolecular Engineering>

Multivariate Porous Materials (MTV) are like a 'collection of Lego blocks,' allowing for customized design at a molecular level to freely create desired structures. Using these materials enables a wide range of applications, including energy storage and conversion, which can significantly contribute to solving environmental problems and advancing next-generation energy technologies. Our research team has, for the first time in the world, introduced quantum computing to solve the difficult problem of designing complex MTVs, opening an innovative path for the development of next-generation catalysts, separation membranes, and energy storage materials.

On September 9, Professor Jihan Kim's research team at our university's Department of Chemical and Biomolecular Engineering announced the development of a new framework that uses a quantum computer to efficiently explore the design space of millions of multivariate porous materials (hereafter, MTV).

MTV porous materials are structures formed by the combination of two or more organic ligands (linkers) and building-block materials such as metal clusters. They have great potential for use in the energy and environmental fields. Their diverse compositional combinations allow for the design and synthesis of new structures for applications such as gas adsorption, mixed-gas separation, sensors, and catalysts. However, as the number of components increases, the number of possible combinations grows exponentially, making it impossible to design and predict the properties of complex MTV structures with the conventional method of checking every single structure on a classical computer.
The research team represented the complex porous structure as a 'network (graph) drawn on a map' and then converted each connection point and block type into qubits that a quantum computer can handle. They then asked the quantum computer to solve the problem: "Which blocks should be arranged at what ratio to create the most stable structure?"

<Figure 1. Overall schematics of the quantum computing algorithm to generate feasible MTV porous materials. The algorithm consists of two mapping schemes (qubit mapping and topology mapping) to allocate building blocks in a given connectivity. Different configurations go through a predetermined Hamiltonian, which is comprised of a ratio term, occupancy term, and balance term, to capture the most feasible MTV porous material>

Because quantum computers can calculate multiple possibilities simultaneously, it is like spreading out millions of Lego houses at once and quickly picking out the sturdiest one. This allows them to explore a vast number of possibilities—which a classical computer would have to calculate one by one—with far fewer resources.

The research team also conducted experiments on four different MTV structures that have been previously reported. The results from the simulation and the IBM quantum computer were identical, demonstrating that the method "actually works well."

<Figure 2. VQE sampling results for experimental structures and the structures that reproduce them, using IBM Qiskit's classical simulator. The experimental structure is predicted to be the most probable outcome of the VQE algorithm's calculation, meaning it will be generated as the most stable form of the structure.>

In the future, the team plans to combine this method with machine learning to expand it into a platform that considers not only simple structural design but also synthesis feasibility, gas adsorption performance, and electrochemical properties simultaneously.
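To make the "Hamiltonian with ratio, occupancy, and balance terms" concrete, here is a tiny classical sketch. The specific penalty formulas, the 4-site square connectivity, and the 50:50 target ratio are all invented assumptions, not the paper's actual Hamiltonian; a VQE on a quantum computer would search this same kind of energy landscape, whereas the sketch simply brute-forces it.

```python
# Toy energy function (assumed form, not the paper's Hamiltonian): assign one
# of two block types (0 or 1) to each of 4 connected sites and penalize
# deviations from a target composition ("ratio" term) and same-type neighbors
# ("balance" term); every site is always filled here, so the "occupancy" term
# is trivially zero in this sketch.
from itertools import product

SITES = 4
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy square connectivity
TARGET_RATIO = 0.5                          # want a 50:50 mix of block types

def energy(bits):
    n1 = sum(bits)
    ratio_term = (n1 / SITES - TARGET_RATIO) ** 2                    # composition
    occupancy_term = 0.0                                             # all sites filled
    balance_term = sum(1.0 for i, j in EDGES if bits[i] == bits[j])  # favor alternation
    return ratio_term + occupancy_term + 0.1 * balance_term

best = min(product([0, 1], repeat=SITES), key=energy)
print(best)  # (0, 1, 0, 1): an alternating, 50:50 arrangement
```

With 4 sites there are only 2^4 = 16 configurations to check; the article's point is that real MTV design spaces hold millions, which is where quantum sampling replaces this brute-force loop.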
Professor Jihan Kim said, "This research is the first case to solve the bottleneck of complex multivariate porous material design using quantum computing." He added, "This achievement is expected to be widely applied as a customized material design technology in fields where precise composition is key, such as carbon capture and separation, selective catalytic reactions, and ion-conducting electrolytes, and it can be flexibly expanded to even more complex systems in the future." Ph.D. candidates Sinyoung Kang and Younghoon Kim of the Department of Chemical and Biomolecular Engineering participated as co-first authors in this study. The research results were published in the online edition of the international journal ACS Central Science on August 22. Paper Title: Quantum Computing Based Design of Multivariate Porous Materials DOI: https://doi.org/10.1021/acscentsci.5c00918 Meanwhile, this research was supported by the Ministry of Science and ICT's Mid-Career Researcher Support Program and the Heterogeneous Material Support Program.

Making Truly Smart AI Agents a Reality with the Wo..
<(From left) Engineer Jeongho Park from GraphAI, Ph.D. candidate Geonho Lee, Professor Min-Soo Kim from KAIST>

For a long time, companies have been using relational databases (DBs) to manage data. However, with the increasing use of large AI models, integration with graph databases is now required. This process, however, reveals limitations such as cost burden, data inconsistency, and difficulty in processing complex queries. Our research team has succeeded in developing a next-generation graph-relational DB system that solves these problems at once, and it is expected to be applied to industrial sites immediately. When this technology is applied, AI will be able to reason about complex relationships in real time, going beyond simple searches, making it possible to implement smarter AI services.

The research team led by Professor Min-Soo Kim announced on September 8 that it has developed a new DB system named 'Chimera' that fully integrates relational and graph DBs to efficiently execute graph-relational queries. Chimera has proven its world-class performance by processing queries at least 4 times and up to 280 times faster than existing systems on international standard performance benchmarks.

Unlike existing relational DBs, graph DBs represent data as vertices (nodes) and edges (connections), which gives them a strong advantage in analyzing and reasoning about complexly intertwined information such as people, events, places, and time. Thanks to this feature, their use is rapidly spreading in fields such as AI agents, SNS, finance, and e-commerce. With the growing demand for complex query processing between relational and graph DBs, a new standard language, 'SQL/PGQ,' which extends the relational query language (SQL) with graph query functions, has also been proposed.
SQL/PGQ is a new standard language that adds graph traversal capabilities to the existing database language (SQL) and is designed to query both table-like data and connected information such as people, events, and places at once. Using it, complex relationships such as 'which company does my friend's friend work for?' can be searched much more simply than before.

<Diagram (a): This diagram shows the typical architecture of a graph query processing system based on a traditional RDBMS. It has separate dedicated operators for graph traversal and an in-memory graph structure, while attribute joins are handled by relational operators. However, this structure makes it difficult to optimize execution plans for hybrid queries because traversal and joins are performed in different pipelines. Additionally, for large-scale graphs, the in-memory structure creates memory constraints, and the method of extracting graph data from relational data limits data freshness.

Diagram (b): This diagram shows Chimera's integrated architecture. Chimera introduces new components to the existing RDBMS architecture: a traversal-join operator that combines graph traversal and joins, a disk-based graph storage, and a dedicated graph access layer. This allows it to process both graph and relational data within a single execution flow. Furthermore, a hybrid query planner integrally optimizes both graph and relational operations. Its shared transaction management and disk-based storage structure enable it to handle large-scale graph databases without memory constraints while maintaining data freshness. This architecture removes the bottlenecks of existing systems by flexibly combining traversal, joins, and mappings in a single execution plan, thereby simultaneously improving performance and scalability.>

The problem is that existing approaches have relied on either trying to mimic graph traversal with join operations or pre-building a graph view in memory.
In the former case, performance drops sharply as the traversal depth increases; in the latter, execution fails due to insufficient memory as soon as the data size grows even slightly. Furthermore, changes to the original data are not immediately reflected in the view, resulting in poor data freshness and the inefficiency of having to combine relational and graph results separately.

The KAIST research team's 'Chimera' fundamentally solves these limitations. The research team redesigned both the storage layer and the query processing layer of the database. First, they introduced a 'dual-store structure' that operates a graph-specific storage and a relational data storage together. They then applied a 'traversal-join operator' that processes graph traversal and relational operations simultaneously, allowing complex operations to be executed efficiently in a single system. Thanks to this, Chimera has established itself as the world's first graph-relational DB system that integrates the entire process from data storage to query processing into one.

As a result, it recorded world-class performance on the international standard benchmark 'LDBC Social Network Benchmark (SNB),' running at least 4 times and up to 280 times faster than existing systems. Query failure due to insufficient memory does not occur no matter how large the graph data becomes, and since it does not use views, there is no data freshness delay.

Professor Min-Soo Kim stated, "As the connections between data become more complex, the need for integrated technology that encompasses both graph and relational DBs is increasing. Chimera is a technology that fundamentally solves this problem, and we expect it to be widely used in various industries such as AI agents, finance, and e-commerce."

The study was co-authored by Geonho Lee, a Ph.D. student in the KAIST School of Computing, as the first author, and Jeongho Park, an engineer at Professor Kim's startup GraphAI Co., Ltd., as the second author, with Professor Kim as the corresponding author. The research results were presented on September 1 at VLDB, a world-renowned international academic conference in the field of databases.

In particular, the newly developed Chimera technology is expected to have an immediate industrial impact as a core technology for implementing 'high-performance AI agents based on RAG (a smart AI assistant with search capabilities),' and will be applied to 'AkasicDB,' a vector-graph-relational DB system scheduled to be released by GraphAI Co., Ltd.

*Paper title: Chimera: A System Design of Dual Storage and Traversal-Join Unified Query Processing for SQL/PGQ
*DOI: https://dl.acm.org/doi/10.14778/3705829.3705845

This research was supported by the Ministry of Science and ICT's IITP SW Star Lab and the National Research Foundation of Korea's Mid-Career Researcher Program.
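The article's 'friend's friend's company' example combines exactly the two operations Chimera's traversal-join fuses: a graph traversal followed by a relational join. Here is a minimal Python sketch of that semantics; the names (`people`, `works_at`, `"ana"`, `"Acme"`) are invented, and this is an illustration of what such a query computes, not of Chimera's implementation.

```python
# Two-hop traversal + attribute join (toy data, invented names):
# roughly what an SQL/PGQ query over a person-KNOWS-person graph and a
# works_at attribute table would return for "my friend's friend's employer".

people = {"me": ["ana"], "ana": ["ben"], "ben": []}   # friendship edges
works_at = {"ana": "Acme", "ben": "Globex"}           # relational table

def friends_of_friends_employers(start):
    """Traverse two hops from `start` (graph part), then join each
    endpoint to the works_at table (relational part)."""
    two_hops = {fof for f in people.get(start, []) for fof in people.get(f, [])}
    return {(p, works_at[p]) for p in two_hops if p in works_at}

print(friends_of_friends_employers("me"))  # {('ben', 'Globex')}
```

Emulating the traversal with pure SQL self-joins needs one join per hop, which is why the article notes that join-based emulation degrades sharply as traversal depth grows.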

KAIST Develops Smart Patch That Can Run Tests Usin..
<(From left) Ph.D. candidate Jaehun Jeon, Professor Ki-Hun Jeong of the Department of Bio and Brain Engineering>

An era is opening in which it is possible to precisely assess the body's health status using only sweat instead of blood tests. A KAIST research team has developed a smart patch that can precisely observe internal changes through sweat when simply attached to the body. This is expected to contribute greatly to the advancement of chronic disease management and personalized healthcare technologies.

KAIST (President Kwang Hyung Lee) announced on September 7 that a research team led by Professor Ki-Hun Jeong of the Department of Bio and Brain Engineering has developed a wearable sensor that can simultaneously analyze multiple metabolites in sweat in real time.

Recently, research on wearable sensors that analyze metabolites in sweat to monitor the human body's precise physiological state has been actively pursued. However, conventional "label-based" sensors, which require fluorescent tags or staining, and "label-free" methods alike have faced difficulties in effectively collecting and controlling sweat. Because of this, there have been limitations in precisely observing metabolite changes over time in actual human subjects.

<Figure 1. Flexible microfluidic nanoplasmonic patch (left). Sequential sample collection using the patch (center) and label-free metabolite profiling (right). In this study, we designed and fabricated a fully flexible nanoplasmonic microfluidic patch for label-free sweat analysis and performed SERS signal measurement and analysis directly from human sweat. Through this, we propose a platform capable of precisely identifying physiological changes induced by physical activity and dietary conditions.>

To overcome these limitations, the research team developed a thin and flexible wearable sweat patch that can be attached directly to the skin.
This patch incorporates both microchannels for collecting sweat and an ultrafine nanoplasmonic structure* that analyzes sweat components with light, without any labeling. Thanks to this, multiple sweat metabolites can be analyzed simultaneously with a single patch application, without the need for separate staining or labels.

* Nanoplasmonic structure: An optical sensor structure in which nanoscale metallic patterns interact with light, designed to sensitively detect the presence of, or changes in the concentration of, molecules in sweat.

The patch was created by combining nanophotonics technology, which manipulates light at the nanometer scale (one hundred-thousandth the thickness of a human hair) to read molecular properties, with microfluidics technology, which precisely controls sweat in channels thinner than a hair. In other words, within a single sweat patch, microfluidic technology enables sweat to be collected sequentially over time, allowing the measurement of changes in various metabolites without any labeling process. Inside the patch are six to seventeen chambers (storage spaces), and sweat secreted during exercise flows along the microfluidic structures and fills each chamber in order.

<Figure 2. Example of the fabricated patch worn (left) and images of sequential sweat collection and storage (right). By designing precise microfluidic channels based on capillary burst valves, sequential sweat collection was implemented, which enabled label-free analysis of metabolite changes associated with exercise and diet.>

The research team applied the patch to actual human subjects and succeeded in continuously tracking the changing components of sweat over time during exercise.
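The sequential chamber filling via capillary burst valves can be illustrated with a back-of-the-envelope model: in a simplified Young-Laplace picture, a valve's burst pressure scales roughly inversely with its width, so progressively narrower outlet valves make chambers fill strictly in order. The valve widths and the simplified pressure formula below are invented for illustration and are not the paper's actual design values.

```python
# Toy capillary-burst-valve ordering (invented geometry; simplified physics):
# burst pressure ~ 2*gamma / width, so narrower valves burst later and
# chambers with progressively narrower outlet valves fill in sequence.

SURFACE_TENSION = 0.072   # N/m, roughly water at room temperature

def burst_pressure(width_m):
    """Simplified Young-Laplace estimate of valve burst pressure in Pa."""
    return 2 * SURFACE_TENSION / width_m

# Hypothetical outlet-valve widths for chambers 1..4 (progressively narrower).
valve_widths = [200e-6, 150e-6, 100e-6, 50e-6]
pressures = [burst_pressure(w) for w in valve_widths]

# As the driving pressure from sweat secretion rises, valves burst in order
# of increasing burst pressure: chamber 1 first, chamber 4 last.
fill_order = sorted(range(1, 5), key=lambda i: pressures[i - 1])
print(fill_order)  # [1, 2, 3, 4]
```

This is the design principle behind time-resolved collection: each chamber holds sweat from a distinct time window without any active pumping.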
Previously, only about two components could be checked simultaneously through a label-free approach, but in this study, they demonstrated for the first time in the world that three metabolites—uric acid, lactic acid, and tyrosine—can be quantitatively analyzed simultaneously, as well as how they change depending on exercise and diet. In particular, by using artificial intelligence analysis methods, they were able to accurately distinguish signals of desired substances even within the complex components of sweat. <Figure 3. Label-free analysis graphs of metabolite changes in sweat induced by exercise. Using the fabricated patch in combination with a machine learning model, metabolite concentrations in the sweat of actual subjects were analyzed. Comparison of sweat samples collected before and after consumption of a purine-rich diet, under exercise conditions, revealed label-free detection of changes in uric acid and tyrosine levels, as well as exercise-induced lactate increase. Validation experiments using commercial kits further confirmed the quantification accuracy, supporting the clinical applicability of this platform> Professor Ki-Hun Jeong said, “This research lays the foundation for precisely monitoring internal metabolic changes over time without blood sampling by combining nanophotonics and microfluidics technologies.” He added, “In the future, it can be expanded to diverse fields such as chronic disease management, drug response tracking, environmental exposure monitoring, and the discovery of next-generation biomarkers for metabolic diseases.” This research was conducted with Jaehun Jeon, a PhD student, as the first author and was published online in Nature Communications on August 27. 
Paper Title: “All-Flexible Chronoepifluidic Nanoplasmonic Patch for Label-Free Metabolite Profiling in Sweat” DOI: https://doi.org/10.1038/s41467-025-63510-2 This achievement was supported by the National Research Foundation of Korea, the Ministry of Science and ICT, the Ministry of Health and Welfare, and the Ministry of Trade, Industry and Energy.

Batteries Make 12-Minute Charge for 800 km Drive a R..
<Photo 1. (From left in the front row) Dr. Hyeokjin Kwon from Chemical and Biomolecular Engineering, Professor Hee Tak Kim, and Professor Seong Su Kim from Mechanical Engineering>

Korean researchers have ushered in a new era for electric vehicle (EV) battery technology by solving the long-standing dendrite problem in lithium-metal batteries. While conventional lithium-ion batteries are limited to a maximum range of 600 km, the new battery can achieve a range of 800 km on a single charge, a lifespan of over 300,000 km, and a super-fast charging time of just 12 minutes.

KAIST (President Kwang Hyung Lee) announced on September 4 that a research team from the Frontier Research Laboratory (FRL), a joint project between Professor Hee Tak Kim of the Department of Chemical and Biomolecular Engineering and LG Energy Solution, has developed an original "cohesion-inhibiting liquid electrolyte" technology that can dramatically increase the performance of lithium-metal batteries.

Lithium-metal batteries replace the graphite anode, a key component of lithium-ion batteries, with lithium metal. However, lithium metal poses a technical challenge known as dendrite formation, which makes it difficult to secure the battery's lifespan and stability. Dendrites are tree-like lithium crystals that form on the anode surface during charging, negatively affecting battery performance and stability. This phenomenon becomes more severe during rapid charging and can cause an internal short circuit, making it very difficult to implement a lithium-metal battery that can be recharged under fast-charging conditions.

The FRL joint research team identified that the fundamental cause of dendrite formation during rapid charging is non-uniform interfacial cohesion on the surface of the lithium metal. To solve this problem, they developed the cohesion-inhibiting liquid electrolyte.
The new liquid electrolyte utilizes an anion structure with a weak binding affinity to lithium ions (Li⁺), minimizing the non-uniformity of the lithium interface. This effectively suppresses dendrite growth even during rapid charging. This technology overcomes the slow charging speed, which was a major limitation of existing lithium-metal batteries, while maintaining high energy density. It enables a long driving range and stable operation even with fast charging. Je-Young Kim, CTO of LG Energy Solution, said, "The four years of collaboration between LG Energy Solution and KAIST through FRL are producing meaningful results. We will continue to strengthen our industry-academia collaboration to solve technical challenges and create the best results in the field of next-generation batteries." <Figure 1. Infographic on the KAIST-LGES FRL Lithium-Metal Battery Technology> Hee Tak Kim, Professor from Chemical and Biomolecular Engineering at KAIST, commented, "This research has become a key foundation for overcoming the technical challenges of lithium-metal batteries by understanding the interfacial structure. It has overcome the biggest barrier to the introduction of lithium-metal batteries for electric vehicles." The study, with Dr. Hyeokjin Kwon from the KAIST Department of Chemical and Biomolecular Engineering as the first author, was published in the prestigious journal Nature Energy on September 3. Nature Energy: According to the Journal Impact Factor announced by Clarivate Analytics in 2024, it ranks first among 182 energy journals and 23rd among more than 21,000 journals overall. Article Title: Covariance of interphasic properties and fast chargeability of energy-dense lithium metal batteries DOI: 10.1038/s41560-025-01838-1 The research was conducted through the Frontier Research Laboratory (FRL, Director Professor Hee Tak Kim), which was established in 2021 by KAIST and LG Energy Solution to develop next-generation lithium-metal battery technology.

KAIST Unlocks the Secret of Next-Generation Memory..
<(From left) Professor Sang-Hee Ko Park, Ph.D. candidate Sunghwan Park, Ph.D. candidate Chaewon Gong, Professor Seungbum Hong>

Resistive Random Access Memory (ReRAM), which is based on oxide materials, is gaining attention as a next-generation memory and neuromorphic computing device. Its fast speed, data retention, and simple structure make it a promising candidate to replace existing memory technologies. KAIST researchers have now clarified the operating principle of this memory, which is expected to provide a key clue for the development of high-performance, high-reliability next-generation memory.

KAIST (President Kwang Hyung Lee) announced on September 2 that a research team led by Professor Seungbum Hong from the Department of Materials Science and Engineering, in collaboration with a team led by Professor Sang-Hee Ko Park from the same department, has for the first time in the world precisely clarified the operating principle of an oxide-based memory device, which is drawing attention as a core technology for next-generation semiconductors.

Using a 'multi-modal scanning probe microscope (multi-modal SPM)' that combines several types of microscopes*, the research team succeeded in simultaneously observing the electron flow channels inside the oxide thin film, the movement of oxygen ions, and changes in surface potential (the distribution of charge on the material's surface). Through this, they clarified the correlation between how the current changes and how oxygen defects change during the process of writing and erasing information in the memory.

*Several types of microscopes: conductive atomic force microscopy (C-AFM) for observing current flow, electrochemical strain microscopy (ESM) for observing oxygen ion movement, and Kelvin probe force microscopy (KPFM) for observing potential changes.
With this special equipment, the research team directly implemented the process of writing and erasing information in the memory by applying an electrical signal to a titanium dioxide (TiO₂) thin film, confirming at the nano level that the current changes were caused by variations in the distribution of oxygen defects. The current flow changes depending on the amount and location of these defects: when oxygen defects accumulate, the electron pathway widens and current flows well, but when they scatter, the current is blocked. Through this, they succeeded in precisely visualizing that the distribution of oxygen defects within the oxide determines the on/off state of the memory. <Overview of the Research Process. By using one of the SPM modes, C-AFM (Conductive Atomic Force Microscopy), resistive switching corresponding to the electroforming and reset processes is induced in a 10 nm-thick TiO₂ thin film, and the resulting local current variations caused by the applied electric field are observed. Subsequently, at the same location, ESM (Electrochemical Strain Microscopy) and KPFM (Kelvin Probe Force Microscopy) signals are comprehensively analyzed to investigate and interpret the spatial correlation of ion-electronic behaviors that influence the resistive switching phenomenon> This research was not limited to the distribution at a single point but comprehensively analyzed the changes in current flow, the movement of oxygen ions, and the surface potential distribution after applying an electrical signal over a wide area of several square micrometers (µm²). As a result, they clarified that the process of the memory's resistance changing is not solely due to oxygen defects but is also closely intertwined with the movement of electrons (electronic behavior).
In particular, the research team confirmed that when oxygen ions are injected during the 'erasing process (reset process)', the memory can stably maintain its off state (high resistance state) for a long time. This is a core principle for increasing the reliability of memory devices and is expected to provide an important clue for the future development of stable, next-generation non-volatile memory. Professor Seungbum Hong of KAIST, who led the research, said, "This is an example that proves we can directly observe the spatial correlation of oxygen defects, ions, and electrons through a multi-modal microscope." He added, "It is expected that this analysis technique will open a new chapter in the research and development of various metal oxide-based next-generation semiconductor devices in the future." <By combining C-AFM and ESM techniques, the correlation between local conductivity and variations in oxygen vacancy concentration after resistive switching is analyzed. After the electroforming process, regions with increased conductivity exhibit an enhancement in the ESM amplitude signal, which can be interpreted as an increase in defect ion concentration. Conversely, after the reset process, regions with reduced conductivity show a corresponding decrease in this signal. Through these observations, it is spatially demonstrated that changes in conductivity and local defect ion concentration after resistive switching exhibit a positive correlation> This research, in which Ph.D. candidate Chaewon Gong from the KAIST Department of Materials Science and Engineering participated as the first author, was published on July 20 in 'ACS Applied Materials and Interfaces', a prestigious academic journal in the field of new materials and chemical engineering published by the American Chemical Society (ACS). 
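The positive spatial correlation described in the figure caption can be checked numerically once the two scan maps are co-registered. The sketch below is illustrative only, using synthetic maps rather than the team's data, and assumes the C-AFM current map and the ESM amplitude map are available as same-shape 2D arrays:

```python
import numpy as np

def spatial_correlation(cafm_map, esm_map):
    """Pearson correlation between two co-registered SPM maps.

    cafm_map: 2D array of local currents (C-AFM)
    esm_map:  2D array of ESM amplitudes, a proxy for oxygen-vacancy concentration
    """
    a = np.asarray(cafm_map, dtype=float).ravel()
    b = np.asarray(esm_map, dtype=float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Synthetic maps: conductivity tracks the defect map plus a little noise,
# mimicking the reported behavior after electroforming.
rng = np.random.default_rng(0)
vacancies = rng.random((64, 64))
current = 2.0 * vacancies + 0.1 * rng.random((64, 64))
r = spatial_correlation(current, vacancies)
assert r > 0.9  # strongly positive spatial correlation
```

After a reset process, the same function applied to the post-reset maps would be expected to show the reduced-signal regions tracking each other in the same way.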
※ Paper Title: Spatially Correlated Oxygen Vacancies, Electrons and Conducting Paths in TiO2 Thin Films This research was carried out with the support of the Ministry of Science and ICT and the National Research Foundation of Korea.

KAIST succeeds in controlling complex altered gen..
< (From left) M.S candidate Insoo Jung, Ph.D candidate Corbin Hopper, Ph.D candidate Seong-Hoon Jang, Ph.D candidate Hyunsoo Yeo, Professor Kwang-Hyun Cho > Previously, research on controlling gene networks has been carried out based on a single stimulus-response of cells. More recently, studies have been proposed to precisely analyze complex gene networks to identify control targets. A KAIST research team has succeeded in developing a universal technology that identifies gene control targets in altered cellular gene networks and restores them. This achievement is expected to be widely applied to new anticancer therapies such as cancer reversibility, drug development, precision medicine, and reprogramming for cell therapy. KAIST (President Kwang Hyung Lee) announced on the 28th of August that Professor Kwang-Hyun Cho’s research team from the Department of Bio and Brain Engineering has developed a technology to systematically identify gene control targets that can restore the altered stimulus-response patterns of cells to normal by using an algebraic approach. The algebraic approach expresses gene networks as mathematical equations and identifies control targets through algebraic computations. The research team represented the complex interactions among genes within a cell as a "logic circuit diagram" (Boolean network). Based on this, they visualized how a cell responds to external stimuli as a "landscape map" (phenotype landscape). < Figure 1. Conceptual diagram of restoring normal stimulus-response patterns represented as phenotype landscapes. Professor Kwang-Hyun Cho’s research team represented the normal stimulus-response patterns of cells as a phenotype landscape and developed a technology to systematically identify control targets that can restore phenotype landscapes damaged by mutations as close to normal as possible.
> By applying a mathematical method called the "semi-tensor product,*" they developed a way to quickly and accurately calculate how the overall cellular response would change if a specific gene were controlled. *Semi-tensor product: a method that calculates all possible gene combinations and control effects in a single algebraic formula. However, because the key genes that determine actual cellular responses number in the thousands, the calculations are extremely complex. To address this, the research team applied a numerical approximation method (Taylor approximation) to simplify the calculations. In simple terms, they transformed a complex problem into a simpler formula while still yielding nearly identical results. Through this, the team was able to calculate which stable state (attractor) a cell would reach and predict how the cell’s state would change when a particular gene was controlled. As a result, they were able to identify core gene control targets that could restore abnormal cellular responses to states most similar to normal. < Figure 2. Schematic diagram of the process of identifying control targets for restoring normal stimulus-response patterns. After algebraically analyzing phenotype landscapes in small-scale (A) and large-scale (B) gene networks, the team calculated all attractors to which each network state reconverges after control, and selected the control targets that restore the phenotype landscape as close to normal as possible. > Professor Cho’s team applied the developed control technology to various gene networks and verified that it can accurately predict gene control targets that restore altered stimulus-response patterns of cells back to normal. In particular, by applying it to bladder cancer cell networks, they identified gene control targets capable of restoring altered responses to normal. They also discovered gene control targets in large-scale distorted gene networks during immune cell differentiation that are capable of restoring normal stimulus-response patterns.
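At toy scale, the attractor computation the team accelerates algebraically can be done by brute force. The three-gene network and its update rules below are hypothetical, and pinning a gene stands in for "control"; the semi-tensor product and Taylor approximation are precisely what make this tractable for networks with thousands of genes, which exhaustive enumeration cannot handle:

```python
from itertools import product

# Hypothetical 3-gene Boolean network: A' = not C, B' = A, C' = A and B
def step(state):
    a, b, c = state
    return (not c, a, a and b)

def attractor_of(state, update):
    """Follow synchronous updates until a state repeats; return the cycle."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = update(state)
    i = seen.index(state)
    cycle = tuple(seen[i:])
    # Canonical rotation so the same cycle compares equal from any entry point.
    return min(cycle[j:] + cycle[:j] for j in range(len(cycle)))

def landscape(update):
    """All attractors reachable over the entire state space."""
    return {attractor_of(s, update) for s in product([False, True], repeat=3)}

# Uncontrolled network: every state falls into a single length-5 limit cycle.
assert {len(a) for a in landscape(step)} == {5}

# "Control": pin gene A on, and the landscape collapses to one fixed point.
def step_pin_a(state):
    _, b, c = step(state)
    return (True, b, c)

assert landscape(step_pin_a) == {((True, True, True),)}
```

A control target search would repeat this for each candidate gene (and pin value) and keep the intervention whose post-control landscape is closest to the normal one.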
This enabled them to solve, in a fast and systematic way, problems that could previously be approached only through approximate searches with lengthy computer simulations. < Figure 3. Accuracy analysis of the developed control technology and comparative validation with existing control technologies. Using various validated gene networks, the team verified whether the developed control technology could identify control targets with high accuracy (A–B). Control targets identified through the developed technology showed reduced recovery efficiency as the degree of mutation-induced phenotype landscape distortion increased (C). In contrast, other control technologies either failed to identify any control targets at all or suggested targets that were less effective than those identified by the developed technology (D). > Professor Cho said, “This study is evaluated as a core original technology for the development of the Digital Cell Twin model*, which analyzes and controls the phenotype landscape of gene networks that determine cell fate. In the future, it is expected to be widely applicable across the life sciences and medicine, including new anticancer therapies through cancer reversibility, drug development, precision medicine, and reprogramming for cell therapy.” *Digital Cell Twin model: a technology that digitally models the complex reactions occurring within cells, enabling virtual simulations of cellular responses instead of actual experiments KAIST master’s student Insoo Jung, PhD student Corbin Hopper, PhD student Seong-Hoon Jang, and PhD student Hyunsoo Yeo participated in this study. The results were published online on August 22 in Science Advances, an international journal published by the American Association for the Advancement of Science (AAAS). 
※ Paper title: “Reverse Control of Biological Networks to Restore Phenotype Landscapes” ※ DOI: https://www.science.org/doi/10.1126/sciadv.adw3995 This research was supported by the Mid-Career Researcher Program and the Basic Research Laboratory Program of the National Research Foundation of Korea, funded by the Ministry of Science and ICT.

KAIST Develops AI that Automatically Detects Defe..
< (From left) Ph.D candidate Jihye Na, Professor Jae-Gil Lee > Recently, AI-based defect detection systems that analyze sensor data have been installed in smart factory manufacturing sites. However, when the manufacturing process changes due to machine replacement or variations in temperature, pressure, or speed, existing AI models fail to properly understand the new situation and their performance drops sharply. KAIST researchers have developed AI technology that can accurately detect defects even in such situations without retraining, achieving performance improvements of up to 9.42%. This achievement is expected to contribute to reducing AI operating costs and expanding applicability in various fields such as smart factories, healthcare devices, and smart cities. KAIST (President Kwang Hyung Lee) announced on the 26th of August that a research team led by Professor Jae-Gil Lee from the School of Computing has developed a new “time-series domain adaptation” technology that allows existing AI models to be utilized without additional defect labeling, even when manufacturing processes or equipment change. Time-series domain adaptation technology enables AI models that handle time-varying data (e.g., temperature changes, machine vibrations, power usage, sensor signals) to maintain stable performance without additional training, even when the training environment (domain) and the actual application environment differ. Professor Lee’s team noted that the core problem of AI models becoming confused by environmental (domain) changes lies not only in differences in data distribution but also in changes in defect occurrence patterns (label distribution) themselves. For example, in semiconductor wafer processes, the ratio of ring-shaped defects and scratch defects may change due to equipment modifications. 
The research team developed a method for decomposing new process sensor data into three components—trends, non-trends, and frequencies—to analyze their characteristics individually. Just as humans detect anomalies by combining pitch, vibration patterns, and periodic changes in machine sounds, AI was enabled to analyze data from multiple perspectives. In other words, the team developed TA4LS (Time-series domain Adaptation for mitigating Label Shifts) technology, which applies a method of automatically correcting predictions by comparing the results predicted by the existing model with the clustering information of the new process data. Through this, predictions biased toward the defect occurrence patterns of the existing process can be precisely adjusted to match the new process. In particular, this technology is highly practical because it can be easily combined like an additional plug-in module inserted into existing AI systems without requiring separate complex development. That is, regardless of the AI technology currently being used, it can be applied immediately with only simple additional procedures. < Figure 1. Concept diagram of the “TA4LS” technology developed by the research team. Sensor data from a new process is grouped by components (trends, non-trends, and frequencies) according to similar patterns. By comparing these with the defect tendencies predicted by the existing model and automatically correcting mismatches, the technology maintains high performance even when processes change. 
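The correction step can be caricatured as prior correction under label shift. The sketch below is a generic label-shift correction, not the published TA4LS algorithm; the function name and the majority-vote mapping from clusters to classes are our own simplifications of "comparing the model's predictions with the clustering information of the new process data":

```python
import numpy as np

def correct_label_shift(probs, cluster_ids):
    """Reweight source-model predictions toward the target label distribution.

    probs:       (N, K) class probabilities from the source-trained model
    cluster_ids: (N,) cluster assignments of target samples (e.g., k-means over
                 trend / residual / frequency features of each series)
    Each cluster is mapped to the class the model favors inside it, cluster
    sizes give a rough target prior, and predictions are prior-corrected.
    """
    n, k = probs.shape
    source_prior = probs.mean(axis=0)
    target_prior = np.zeros(k)
    for c in np.unique(cluster_ids):
        members = probs[cluster_ids == c]
        majority = members.mean(axis=0).argmax()  # class this cluster leans toward
        target_prior[majority] += len(members) / n
    target_prior = np.clip(target_prior, 1e-6, None)
    adjusted = probs * (target_prior / source_prior)
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Toy check: the source model over-predicts class 1, but clustering shows only
# 30% of the new-process samples actually form that group.
probs = np.vstack([np.tile([0.60, 0.40], (70, 1)),
                   np.tile([0.45, 0.55], (30, 1))])
clusters = np.array([0] * 70 + [1] * 30)
adj = correct_label_shift(probs, clusters)
assert np.allclose(adj.sum(axis=1), 1.0)
assert adj.mean(axis=0)[1] < probs.mean(axis=0)[1]  # class-1 mass pulled down
```

Because the correction only post-processes probabilities, it composes with any existing classifier, which is the plug-in property the article emphasizes.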
> In experiments using four benchmark datasets for time-series domain adaptation (i.e., four types of sensor data in which domain changes had occurred), the research team achieved up to 9.42% improvement in accuracy compared to existing methods. Especially when process changes caused large differences in label distribution (e.g., defect occurrence patterns), the AI demonstrated remarkable performance improvement by autonomously correcting and distinguishing such differences. These results showed that the technology is especially effective, without additional defect labeling, in environments that produce small batches of various products, one of the main advantages of smart factories. Professor Jae-Gil Lee, who supervised the research, said, “This technology solves the retraining problem, which has been the biggest obstacle to the introduction of artificial intelligence in manufacturing. Once commercialized, it will greatly contribute to the spread of smart factories by reducing maintenance costs and improving defect detection rates.” This research was carried out with Jihye Na, a Ph.D. student at KAIST, as the first author, with Youngeun Nam, a Ph.D. student, and Junhyeok Kang, a researcher at LG AI Research, as co-authors. The research results were presented in August 2025 at KDD (the ACM SIGKDD Conference on Knowledge Discovery and Data Mining), the world’s top academic conference in data mining. ※Paper Title: “Mitigating Source Label Dependency in Time-Series Domain Adaptation under Label Shifts” ※DOI: https://doi.org/10.1145/3711896.3737050 This technology was developed as part of the research outcome of the SW Computing Industry Original Technology Development Program’s SW StarLab project (RS-2020-II200862, DB4DL: Development of Highly Available and High-Performance Distributed In-Memory DBMS for Deep Learning), supported by the Ministry of Science and ICT and the Institute for Information & Communications Technology Planning & Evaluation (IITP).

In KAIST, Robots Now Untie Rubber Bands and Inser..
< (From left) M.S candidate Minseok Song, Professor Daehyung Park > The technology that allows robots to handle deformable objects such as wires, clothing, and rubber bands has long been regarded as a key task in the automation of manufacturing and service industries. However, since such deformable objects do not have a fixed shape and their movements are difficult to predict, robots have faced great difficulties in accurately recognizing and manipulating them. KAIST researchers have developed a robot technology that can precisely grasp the state of deformable objects and handle them skillfully, even with incomplete visual information. This achievement is expected to contribute to intelligent automation in various industrial and service fields, including cable and wire assembly, manufacturing that handles soft components, and clothing organization and packaging. KAIST (President Kwang Hyung Lee) announced on the 21st of August that the research team led by Professor Daehyung Park of the School of Computing developed an artificial intelligence technology called “INR-DOM (Implicit Neural-Representation for Deformable Object Manipulation),” which enables robots to skillfully handle objects whose shape continuously changes like elastic bands and which are visually difficult to distinguish. Professor Park’s research team developed a technology that allows robots to completely reconstruct the overall shape of a deformable object from partially observed three-dimensional information and to learn manipulation strategies based on it. Additionally, the team introduced a new two-stage learning framework that combines reinforcement learning and contrastive learning so that robots can efficiently learn specific tasks. 
The trained controller achieved significantly higher task success rates compared to existing technologies in a simulation environment, and in real robot experiments, it demonstrated a high level of manipulation capability, such as untying complicatedly entangled rubber bands, thereby greatly expanding the applicability of robots in handling deformable objects. Deformable Object Manipulation (DOM) is one of the long-standing challenges in robotics. This is because deformable objects have infinite degrees of freedom, making their movements difficult to predict, and the phenomenon of self-occlusion, in which the object hides parts of itself, makes it difficult for robots to grasp their overall state. To solve these problems, representation methods of deformable object states and control technologies based on reinforcement learning have been widely studied. However, existing representation methods could not accurately represent continuously deforming surfaces or complex three-dimensional structures of deformable objects, and since state representation and reinforcement learning were separated, there was a limitation in constructing a suitable state representation space needed for object manipulation. < Figure 1. (From top) A robotic arm performing a sealing task inserting a rubber ring into a groove, an installation task attaching an O-ring onto a cylinder, and a disentanglement task untying a rubber band tangled between two pillars. INR-DOM accurately grasped the tangled state of the object from partial observation and successfully performed the tasks. > To overcome these limitations, the research team utilized “Implicit Neural Representation.” This technology receives partial three-dimensional information (point cloud*) observed by the robot and reconstructs the overall shape of the object, including unseen parts, as a continuous surface (signed distance function, SDF). This enables robots to imagine and understand the overall shape of the object just like humans. 
*Point cloud 3D information: a method of representing the three-dimensional shape of an object as a “set of points” on its surface. Furthermore, the research team introduced a two-stage learning framework. In the first stage of pre-training, a model is trained to reconstruct the complete shape from incomplete point cloud data, securing a state representation module that is robust to occlusion and capable of well representing the surfaces of stretching objects. In the second stage of fine-tuning, reinforcement learning and contrastive learning are used together to optimize the control policy and state representation module so that the robot can clearly distinguish subtle differences between the current state and the goal state and efficiently find the optimal action required for task execution. When the INR-DOM technology developed by the research team was mounted on a robot and tested, it showed overwhelmingly higher success rates than the best existing technologies in three complex tasks in a simulation environment: inserting a rubber ring into a groove (sealing), installing an O-ring onto a part (installation), and untying tangled rubber bands (disentanglement). In particular, in the most challenging task, disentanglement, the success rate reached 75%, which was about 49% higher than the best existing technology (ACID, 26%). < Figure 2. INR-DOM goes through a two-stage learning process. In the first stage (pre-training), a model is trained to reconstruct a complete 3D shape from partial point cloud data. In the second stage (fine-tuning), reinforcement learning and contrastive learning are used to efficiently learn manipulation policies optimized for specific tasks. > The research team also verified that INR-DOM technology is applicable in real environments by combining sample-efficient robotic reinforcement learning with INR-DOM and performing reinforcement learning in a real-world environment. 
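The idea of recovering a full surface from a partial point cloud as a signed distance function can be shown at toy scale. Instead of INR-DOM's neural network, the sketch below fits an analytic circle to a partial 2D "point cloud" (a Kasa least-squares fit, chosen here purely for illustration) and then queries the SDF on the side the sensor never saw:

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit (Kasa method) to a partial 2D point cloud."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    return np.array([cx, cy]), radius

def sdf(query, center, radius):
    """Signed distance to the circle: negative inside, zero on the surface."""
    return np.linalg.norm(np.asarray(query) - center, axis=-1) - radius

# Partial observation: only a 90-degree arc of a unit circle centered at (1, 2),
# as if the rest of the object were self-occluded.
theta = np.linspace(0.0, np.pi / 2, 50)
partial = np.column_stack([1 + np.cos(theta), 2 + np.sin(theta)])

center, radius = fit_circle(partial)
# The fitted SDF "imagines" the unseen side: (0, 2) was never observed,
# yet it lies on the reconstructed surface (SDF ~ 0 there).
assert abs(sdf([0.0, 2.0], center, radius)) < 1e-6
```

A learned implicit representation plays the same role for arbitrary deforming shapes: it maps any 3D query point to a signed distance, giving the policy a continuous, occlusion-robust state description.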
As a result, in actual environments, the robot performed insertion, installation, and disentanglement tasks with a success rate of over 90%, and in particular, in the visually difficult bidirectional disentanglement task, it achieved a 25% higher success rate compared to existing image-based reinforcement learning methods, proving that robust manipulation is possible despite visual ambiguity. Minseok Song, a master’s student and first author of this research, stated that “this research has shown the possibility that robots can understand the overall shape of deformable objects even with incomplete information and perform complex manipulation based on that understanding.” He added, “It will greatly contribute to the advancement of robot technology that performs sophisticated tasks in cooperation with humans or in place of humans in various fields such as manufacturing, logistics, and medicine.” This study, with KAIST School of Computing master’s student Minseok Song as first author, was presented at the top international robotics conference, Robotics: Science and Systems (RSS) 2025, held June 21–25 at USC in Los Angeles. 
※ Paper title: “Implicit Neural-Representation Learning for Elastic Deformable-Object Manipulations” ※ DOI: https://www.roboticsproceedings.org/ (to be released), currently https://arxiv.org/abs/2505.00500 This research was supported by the Ministry of Science and ICT through the Institute of Information & Communications Technology Planning & Evaluation (IITP)’s projects “Core Software Technology Development for Complex-Intelligence Autonomous Agents” (RS-2024-00336738; Development of Mission Execution Procedure Generation Technology for Autonomous Agents’ Complex Task Autonomy), “Core Technology Development for Human-Centered Artificial Intelligence” (RS-2022-II220311; Goal-Oriented Reinforcement Learning Technology for Multi-Contact Robot Manipulation of Everyday Objects), “Core Computing Technology” (RS-2024-00509279; Global AI Frontier Lab), as well as support from Samsung Electronics. More details can be found at https://inr-dom.github.io.

KAIST Leading the International Standardization o..
< (From left) Seongha Hwang (Ph.D. candidate), Woohyuk Chung (Ph.D. candidate), Professor Jooyoung Lee (School of Computing) > In computer security, random numbers are crucial values that must be unpredictable—such as secret keys or initialization vectors (IVs)—forming the foundation of security systems. To achieve this, deterministic random bit generators (DRBGs) are used, which produce numbers that appear random. However, existing DRBGs had limitations in both security (unpredictability against hacking) and output speed. KAIST researchers have developed a DRBG that theoretically achieves the highest possible level of security through a new proof technique, while maximizing speed by parallelizing its structure. This enables safe and ultra-fast random number generation applicable from IoT devices to large-scale servers. KAIST (President Kwang Hyung Lee) announced on the 20th of August that a research team led by Professor Jooyoung Lee from the School of Computing has established a new theoretical framework for analyzing the security of permutation*-based deterministic random bit generators (DRBG, Deterministic Random Bit Generator) and has designed a DRBG that achieves optimal efficiency. *Permutation: The process of shuffling bits or bytes by changing their order, allowing bidirectional conversion (the shuffled data can be restored to its original state). Deterministic random bit generators create unpredictable random numbers from entropy sources (random data obtained from the environment) using basic cryptographic operations such as block ciphers, hash functions, and permutations. Through a new proof technique, the team established a security bound for permutation-based DRBGs approximately 50% higher than that of existing proofs. They also proved that this bound is the theoretical maximum achievable. The research team also designed POSDRBG (Parallel Output Sponge-based DRBG) to address the output efficiency limitation of the existing sponge structure caused by its serial (single-line) processing. 
The newly proposed parallel structure processes multiple streams simultaneously, thereby achieving the maximum efficiency possible for permutation-based DRBGs. Professor Jooyoung Lee stated, “POSDRBG is a new deterministic random bit generator that improves both random number generation speed and security, making it applicable from small IoT devices to large-scale servers. This research is expected to positively influence the ongoing revision of the international DRBG standard SP800-90A*, leading to the formal inclusion of permutation-based DRBGs.” *SP800-90A: An international standard document established by the U.S. NIST (National Institute of Standards and Technology), defining the design and operational criteria for DRBGs used in cryptographic systems. Until now, permutation-based DRBGs have not been included in the standard. This research, with Woohyuk Chung (KAIST, first author), Seongha Hwang (KAIST), Hwigyeom Kim (Samsung Electronics), and Jooyoung Lee (KAIST, corresponding author), will be presented in August at CRYPTO (the Annual International Cryptology Conference), the world’s top academic conference in cryptology. Article title: “Enhancing Provable Security and Efficiency of Permutation-Based DRBGs“ DOI: https://doi.org/10.1007/978-3-032-01901-1_15 This research was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP). < Figure 1. Sponge structure that outputs sequence Zi using permutation function P > The random number output function of the existing Sponge-DRBG uses a sponge structure that directly connects the permutation P. For reference, all existing permutation-function-based DRBGs have this sponge structure. In the sponge structure, among the n-bit inputs of P, only the upper r bits are used as the output Z. Therefore, the output efficiency is always limited to r/n. < Figure 2. 
Output structure of POSDRBG > In this study, the random number output function of POSDRBG was designed to allow parallel computation, and all n-bit outputs of the permutation function P become random numbers Z. Therefore, it has an output efficiency of 1.
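The efficiency gap between the serial sponge (Figure 1) and the parallel output structure (Figure 2) can be sketched with a toy stand-in. SHA-256 below is not a permutation and the counter-indexed branching is only a caricature of POSDRBG's output function, so nothing here is cryptographically suitable; the point is purely the r/n-versus-1 output rate:

```python
import hashlib

N = 32  # "permutation" width in bytes (toy value)
R = 16  # sponge rate: bytes emitted per call

def perm(state: bytes) -> bytes:
    """Toy N-byte mixing function (SHA-256; NOT a real permutation and NOT
    a secure DRBG - illustration of the call pattern only)."""
    return hashlib.sha256(state).digest()

def sponge_squeeze(state: bytes, out_len: int) -> bytes:
    """Serial sponge output: only the top R of N bytes per call are emitted,
    so the output efficiency is R/N (1/2 here)."""
    out = b""
    while len(out) < out_len:
        state = perm(state)
        out += state[:R]
    return out[:out_len]

def parallel_squeeze(state: bytes, out_len: int) -> bytes:
    """Parallel-style output: every call emits all N bytes (efficiency 1),
    and the counter-indexed branches are independent, so they can run
    concurrently."""
    out = b""
    for i in range((out_len + N - 1) // N):
        out += perm(state + i.to_bytes(4, "big"))
    return out[:out_len]

seed = bytes(N)
calls_sponge = (64 + R - 1) // R    # 4 calls for 64 bytes of output
calls_parallel = (64 + N - 1) // N  # 2 calls for the same 64 bytes
assert calls_sponge == 2 * calls_parallel
```

With R/N = 1/2, the serial sponge needs twice as many permutation calls per output byte, and the calls are forced into sequence; the parallel structure removes both costs, which is the efficiency-1 claim above.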

KAIST develops world’s most sensitive light-powere..
<(From left) Ph.D candidate Jaeha Hwang, Ph.D candidate Jungi Song, Professor Kayoung Lee from Electrical Engineering> Silicon semiconductors used in existing photodetectors have low light responsivity, and the two-dimensional semiconductor MoS₂ (molybdenum disulfide) is so thin that doping processes to control its electrical properties are difficult, limiting the realization of high-performance photodetectors. The KAIST research team has overcome this technical limitation and developed the world’s highest-performing self-powered photodetector, which operates without electricity in environments with a light source. This paves the way for an era where precise sensing is possible without batteries in wearable devices, biosignal monitoring, IoT devices, autonomous vehicles, and robots, as long as a light source is present. KAIST (President Kwang Hyung Lee) announced on the 14th of August that Professor Kayoung Lee’s research team from the School of Electrical Engineering has developed a self-powered photodetector that operates without external power supply. This sensor demonstrated a sensitivity up to 20 times higher than existing products, marking the highest performance level among comparable technologies reported to date. Professor Kayoung Lee’s team fabricated a “PN junction structure” photodetector capable of generating electrical signals on its own in environments with light, even without an electrical energy supply, by introducing a “van der Waals bottom electrode” that makes semiconductors extremely sensitive to electrical signals without doping. First, a “PN junction” is a structure formed by joining p-type (hole-rich) and n-type (electron-rich) materials in a semiconductor. This structure causes current to flow in one direction when exposed to light, making it a key component in photodetectors and solar cells. 
Normally, to create a proper PN junction, a process called “doping” is required, which involves deliberately introducing impurities into the semiconductor to alter its electrical properties. However, two-dimensional semiconductors such as MoS₂ are only a few atoms thick, so doping in the conventional way can damage the structure or reduce performance, making it difficult to create an ideal PN junction. To overcome these limitations and maximize device performance, the research team designed a new device structure incorporating two key technologies: the “van der Waals electrode” and the “partial gate.” The “partial gate” structure applies an electrical signal only to part of the two-dimensional semiconductor, controlling one side to behave like p-type and the other like n-type. This allows the device to function electrically like a PN junction without doping. Furthermore, considering that conventional metal electrodes can chemically bond strongly to the semiconductor and damage its lattice structure, the “van der Waals bottom electrode” was attached gently using van der Waals forces. This preserved the original structure of the two-dimensional semiconductor while ensuring effective electrical signal transfer. This innovative approach secured both structural stability and electrical performance, enabling the realization of a PN junction in thin two-dimensional semiconductors without damaging their structure. Thanks to this innovation, the team succeeded in implementing a high-performance PN junction without doping. The device can generate electrical signals with extreme sensitivity as long as there is light, even without an external power source. Its light detection sensitivity (responsivity) exceeds 21 A/W, more than 20 times higher than powered conventional sensors, 10 times higher than silicon-based self-powered sensors, and over twice as high as existing MoS₂ sensors. 
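Responsivity, the figure of merit quoted above, is simply photocurrent per unit of incident optical power. A quick sanity check of the units; the power and current levels below are hypothetical, chosen only to reproduce the article's 21 A/W figure:

```python
def responsivity(photocurrent_a, optical_power_w):
    """Responsivity R = I_ph / P_opt, expressed in amperes per watt (A/W)."""
    return photocurrent_a / optical_power_w

# Hypothetical operating point (illustrative numbers only):
p_opt = 1e-9   # 1 nW of incident light
i_ph = 21e-9   # 21 nA of photocurrent
assert abs(responsivity(i_ph, p_opt) - 21.0) < 1e-9
```

By this measure, a detector that produces 20 times the photocurrent of a conventional sensor at the same illumination has 20 times the responsivity, which is how the comparisons above are framed.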
This level of sensitivity means it can be applied immediately to high-precision sensors capable of detecting biosignals or operating in dark environments. Professor Kayoung Lee stated that they “have achieved a level of sensitivity unimaginable in silicon sensors, and although two-dimensional semiconductors are too thin for conventional doping processes, [they] succeeded in implementing a PN junction that controls electrical flow without doping.” She added, “This technology can be used not only in sensors but also in key components that control electricity inside smartphones and electronic devices, providing a foundation for miniaturization and self-powered operation of next-generation electronics.” <Jaeha Hwang, Jungi Song, Experiment in Progress> This research, with doctoral students Jaeha Hwang and Jungi Song as co-first authors, was published online on July 26 in Advanced Functional Materials (IF 19), a leading journal in materials science. ※ Paper title: Gated PN Junction in Ambipolar MoS₂ for Superior Self-Powered Photodetection ※ DOI: https://advanced.onlinelibrary.wiley.com/doi/10.1002/adfm.202510113 Meanwhile, this work was supported by the National Research Foundation of Korea, the Korea Basic Science Institute, Samsung Electronics, and the Korea Institute for Advancement of Technology.