Our research group works on various applications of large-scale distributed parallelism on supercomputers. In recent years, deep learning has been the focus of attention in fields such as image recognition, natural language processing, and reinforcement learning. The scale of the deep neural networks used in these fields is growing exponentially, and training them is becoming impossible without supercomputers. However, simply running existing deep learning frameworks on supercomputers will not immediately improve training speed or the accuracy of the resulting model. The issues specific to large-scale distributed training must be solved one by one before we can investigate the scaling laws of deep neural networks. Scientific computing, which has been performed on supercomputers for a long time, also requires continuous research on algorithms and implementation methods to keep pace with ever-changing computer architectures. Furthermore, since computer performance continues to improve exponentially following Moore's Law, the calculations performed on today's supercomputers will be feasible on a local desktop computer in 10 years. In other words, solving the problems on today's supercomputers is equivalent to solving the research problems of 10 years from now in advance.


Our laboratory has access to some of the largest supercomputers in Japan, including TSUBAME at Tokyo Institute of Technology, Wisteria BDEC at the University of Tokyo, ABCI at AIST, and Fugaku at RIKEN. In addition, by actively using the Grand Challenge System, which grants exclusive access to the entire system of these supercomputers, we are able to use one of the largest amounts of computing resources among academic research groups. Furthermore, through collaborative research agreements (MOUs) [https://adac.ornl.gov] with 11 major supercomputer centers around the world, we have access to Summit at ORNL, one of the world's largest supercomputers, and Piz Daint at ETH/CSCS. Computations that would take weeks in a typical research group's computing environment can be performed in a few hours in our group.


Our expertise in large-scale computation on supercomputers and our vast computing resources are useful in many research fields, including deep learning and scientific computing. We currently participate in many joint research projects with domestic and international research institutions and companies, both within and outside the university. Within our university, we collaborate with the Shinoda Group on neural architecture search, the Inoue Group on continual learning and large-scale pre-training, the Sato Group on self-supervised learning and deep metric learning for automated driving, and the Kanezaki Group on inverse reinforcement learning. Externally, we collaborate with the Khan Group at RIKEN AIP on Bayesian deep learning, the Matsuoka Group at RIKEN R-CCS on large-scale deep learning on Fugaku, the computer vision research team at AIST on fractal-based pre-training, and the social intelligence research team on visual SLAM. Outside Japan, we collaborate with Jack Dongarra's group at the University of Tennessee on hierarchical low-rank approximation, Torsten Hoefler's group at ETH on techniques for accelerating deep learning, and Kris Kitani's group at CMU on 3D pose estimation of objects. This means that you can choose from a wide range of research topics; and if you want to find a new research topic on your own, you are not limited to the expertise of your supervisor alone, but can receive appropriate support through our collaborators.




The choice between "deciding your own research topic" and "being given a research topic" is not binary. As you gain experience, you will be able to set your own problems at a larger granularity. People grow fastest in an environment where the granularity of what they can decide for themselves (their degree of freedom) is set appropriately. The accumulated sense of accomplishment from solving small problems also increases motivation. The exercises given in lectures up to the third year of undergraduate study are problems that have been examined over the years and carefully selected at the finest granularity. Once you join a research group, you will set and solve more coarse-grained problems on your own as the years progress. One advantage of continuing an existing research topic in the group is that the initial stage of research is accelerated through the guidance of your predecessors; it also helps the senior members, because teaching junior members is the best way to learn. No matter how big an obstacle is, you can create incremental steps to overcome it. Such steps become an asset for the many successors who follow in your footsteps. I believe that an academic paper should essentially be a summary of such findings.


Due to the recent publish-or-perish culture, an increasing number of papers exaggerate the advantages and significance of their proposed methods. This is a systemic problem, since papers written this way have a higher chance of being accepted by a journal or conference. However, it creates a situation where finding truly important information amid all the noise becomes increasingly difficult. Up to the undergraduate level, we did not have to pay much attention to the signal-to-noise ratio of the information we were given, since textbooks are the product of decades of distilled knowledge. At the graduate level and beyond, the information is less distilled, and not everything that is published is true or important. It therefore becomes increasingly important to filter out the noise and capture the essential, fundamental concepts in the information you obtain. These judgments should be based on whether the information is consistent with everything you have learned so far, and should not be influenced by the authority of the author or the institution that published it. Superficially reading a large number of papers will not help you in this regard.


One of the most common misconceptions among students is that "the topic I wanted to work on has already been done." When you are just starting your research, your ideas are still vague and coarse-grained. Viewed from a coarse-grained perspective, almost any idea has existing studies. What is important for beginners is to find (or have your advisor help you find) a research theme that is fine-grained enough to differentiate it from existing work, and then do your best to work on it. If you do this, you will gradually be able to find novelty and superiority at a coarser granularity as you advance. In our laboratory, we try to set problems of an appropriate granularity for each student. Of course, if a student has the ability to find a completely new, coarse-grained theme on their own from the beginning, we encourage them to do so.



Our laboratory encourages students to study abroad. We can introduce you to long-term and short-term study abroad opportunities while you are enrolled, and since several of our alumni have gone on to graduate schools abroad, we have accumulated know-how on how to prepare for such opportunities. The following is a list of past students' destinations for study abroad and further education.

  • A*STAR (Singapore) 1 student
  • Carnegie Mellon University (USA) 2 students
  • University of Montreal (Canada) 1 student
  • University of Tennessee (USA) 1 student


Internships are not just a part of job hunting; we actively encourage them because in many cases they increase motivation for research through practical experience at companies and research institutes. All of the students who have gone on to PhD programs in our group experienced multiple internships during their master's degrees, and we hope that students will choose further education for positive reasons after seeing firsthand what top companies have to offer. Many of the students who have belonged to our group have completed multiple internships, and we have a network of internship destinations that our students have cultivated. The following is a list of past student internships (in alphabetical order).

  • AIST
  • Axon, Inc.
  • CyberAgent, Inc.
  • Fixstars Corporation
  • Future Corporation
  • Google
  • IBM Research Tokyo
  • Livesense Inc.
  • Mercari Inc.
  • Nagase Brothers Inc.
  • Nefrok
  • NextSilicon
  • Nomura Research Institute, Ltd.
  • Panasonic Corporation
  • Preferred Networks, Inc.
  • Quansight Inc.
  • Sony Corporation
  • SORACOM Inc.
  • Team Lab Inc.
  • Techouse, Inc.
  • Telexistence, Inc.
  • Yahoo! Japan


As a laboratory in the field of high performance computing, we have many students who take part in programming competitions every year, achieving good results in events such as AtCoder, Kaggle, and the ICPC. Our research in high-performance computing directly benefits from the implementation skills cultivated in these competitions.