A paper entitled “A Low Noise Single-Slope ADC with Signal-Dependent Multiple Sampling Technique” has been accepted to the International Image Sensor Workshop (IISW) this year. The paper is authored by Sanguk Lee, Seunghyun Lee, Bumjun Kim, and Prof. Seong-Jin Kim. IISW is one of the most prestigious conferences in the field of CMOS imagers and

Prof. Seong-Jin Kim has been invited to serve as a technical program committee (TPC) member at the IEEE International Solid-State Circuits Conference (ISSCC). As the single most prestigious conference in the solid-state circuits community, ISSCC is also called “the Olympics of semiconductor circuits,” and it is held in San Francisco every

Sam H. Noh, UNIST Professor of Computer Science and Engineering, has been elected to serve as one of two co-chairs for the 18th USENIX Conference on File and Storage Technologies (FAST ’20). This can be viewed as a positive sign that South Korean data storage technologies, including flash memory, are world-class academically. USENIX, founded in 1975, is the Advanced

[Welcome] New faculty member

February 22, 2019

Prof. Jooyong Yi has joined the School of ECE (CSE Track) as of Feb. 13, 2019. Greetings from the professor: I am truly delighted to join UNIST as a faculty member after working abroad for the last decade or so. After receiving a PhD degree from Aarhus University in Denmark, I worked in various places such as

[Welcome] New faculty member

February 22, 2019

Prof. Kwang In Kim has joined the School of ECE (CSE Track) as of Feb. 13, 2019. Greetings from the professor: I am interested in advancing the understanding of how we can explore, make sense of, and interact with visual data. I contribute to this endeavor by exploiting and developing new techniques in machine learning, computer vision,

A research paper entitled “CoPart: Coordinated Partitioning of Last-Level Cache and Memory Bandwidth for Fairness-Aware Workload Consolidation on Commodity Servers” has been accepted for publication at EuroSys ’19. The paper is co-authored by Jinsu Park, Seongbeom Park, and Prof. Woongki Baek at the Computer Architecture and Systems Lab (CASL), CSE, UNIST. This work proposes CoPart, coordinated partitioning

Two papers have been accepted for publication at FAST 2019. One paper is co-authored by Olzhas Kaiyrakhmet, Songyi Lee, Prof. Beomseok Nam (Sungkyunkwan Univ.), Prof. Sam H. Noh, and Prof. Young-ri Choi, and the other paper is co-authored by Moohyeon Nam, Hokeun Cha (Sungkyunkwan Univ.), Prof. Young-ri Choi, Prof. Sam H. Noh, and Prof. Beomseok

Time: Jan. 14 (Mon), 14:00~ / Location: 104 E101 / Speaker: Seo Jin Park / Abstract: Since the advent of the Internet, applications have started using millions of people’s data, which is beyond the capacity of a single machine. In response to the need for large-scale systems, the systems community focused on highly scalable distributed systems, such

December 5, 2018 / 16:00 ~ 17:15 / Speaker: Prof. Jongho Lee / Abstract: The brain consists of 100 billion neurons and 1 million km of connections between them; it accounts for about 2% of body weight yet uses about 20% of the body’s energy, making it a complex and dynamic organ. This seminar will explain these basic characteristics of the brain and introduce brain imaging using magnetic resonance imaging, which forms a key part of the methodologies for studying the brain

Abstract: In this talk, I will introduce both completed work and work in progress at the intersection of systems and machine learning. After (very) briefly discussing projects on reducing tail latency in interactive services and intelligent real-time data analytics, I will dive into the main body of the talk, which is toward building GPU

Abstract: Despite advances in the understanding and mechanisms of securing computer systems, researchers report new forms of attacks every year, and production systems often do not adopt all of the state-of-the-art techniques. The fundamental reason behind this insecurity is the ever-increasing complexity of these systems, which is unavoidable because of programmability and time to market.

Professor Myeongjae Jeon’s paper titled “Tiresias: A GPU Cluster Manager for Distributed Deep Learning” has been accepted to NSDI 2019. This work was done in collaboration with researchers at the University of Michigan, Microsoft Research, Alibaba, and Bytedance. This paper presents Tiresias, a GPU cluster resource manager tailored for distributed deep learning training, which efficiently
