Research in our group lies at the intersection of high performance computing (HPC) and machine learning (ML). Our group has access to some of the largest supercomputers in the world, and uses this rich computational resource to tackle state-of-the-art distributed deep learning problems. Research spans the application level (scaling laws of neural networks), the algorithm level (fast linear algebra methods), and low-level implementation (CUDA/PTX, C++).

News

2022.07.28 The work by Hiroyuki Ootomo (3rd year PhD) received the Yamashita Memorial Research Award.
2022.06.13 The work by Qianxiang Ma (2nd year PhD) and Sameer Deshmukh (3rd year PhD) was accepted to SC22.
2022.05.29 The work by Muhammad Ridwan Apriansyah (2nd year PhD) was accepted to ACM TOMS.
2022.03.04 The work by Shukai Nakamura (4th year Bachelor) won the Student Encouragement Award at the 84th National Convention of IPSJ.
2022.03.03 The work by Edgar Martinez Noriega (Postdoc), Sora Takashima (1st year Master), and Xinyu Zhang (1st year Master) in collaboration with AIST was accepted to CVPR2022.
2022.02.28 The work by Hiroyuki Ootomo (2nd year PhD) was accepted to IJHPCA.
2022.02.01 The work by Hana Hoshino (2nd year Master) was accepted to ICRA2022.
2021.09.17 A CREST project with Emtiyaz Khan at RIKEN AIP was awarded.
2021.07.23 The work by Shun Iwase (graduated) was accepted to ICCV2021.
2021.04.13 The work by Hikaru Nakata (graduated) was accepted to the CVPR2021 Workshop CLVISION.
» More Info.

About Us

Tokyo Tech, GSIC, Advanced Computing Research Division, Advanced Applications of High-Performance Computing Group

» More Info.

Hierarchical Low-Rank Approximation

Dense matrices appear in many computational applications, such as boundary integral methods for solving homogeneous partial differential equations...
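As a rough, self-contained illustration of the underlying idea (a NumPy sketch with an illustrative kernel, point distribution, and tolerance, not the group's actual library code), an off-diagonal block of a kernel matrix generated by two well-separated point clusters is numerically low rank and can be compressed with a truncated SVD:

```python
# Illustrative sketch only: compress an off-diagonal kernel block with a truncated SVD.
import numpy as np

rng = np.random.default_rng(0)
left = rng.uniform(0.0, 1.0, 256)    # one cluster of source points
right = rng.uniform(3.0, 4.0, 256)   # a well-separated cluster of target points

# Dense off-diagonal block A[i, j] = 1 / |x_i - y_j| (a simple 1D Laplace-like kernel).
A = 1.0 / np.abs(left[:, None] - right[None, :])

# Keep only the singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))
A_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

print(f"block size {A.shape}, numerical rank {rank}")
print(f"relative error {np.linalg.norm(A - A_lr) / np.linalg.norm(A):.2e}")
```

Hierarchical methods apply this kind of compression recursively to the admissible (well-separated) blocks of the matrix, reducing storage and arithmetic cost well below the dense quadratic bound.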

» More Info.

Application to Deep Learning

Deep learning does not require very high precision, and this fact is exploited by recent low-precision hardware from NVIDIA and Google...
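A minimal sketch of the point (illustrative NumPy code, not the group's research code; the matrix sizes and dtype choices are arbitrary): computing a matrix product with float16 inputs and outputs instead of float64 perturbs the result only at the level of half-precision rounding error, which is often tolerable when training neural networks:

```python
# Illustrative sketch only: compare a float16 matrix product against a float64 reference.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))
B = rng.standard_normal((512, 512))

ref = A @ B                                                    # float64 reference
low = (A.astype(np.float16) @ B.astype(np.float16)).astype(np.float64)

rel_err = np.linalg.norm(ref - low) / np.linalg.norm(ref)
print(f"relative error of the float16 product: {rel_err:.2e}")
```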

» More Info.