Older Student Research Projects
Older Student Research Opportunities (NOT ACTIVE PROJECTS)
These research projects are no longer active; they are provided as examples of student research projects from 2016-2019.
CO-DESIGN FOR HPC APPLICATIONS
High-performance computing (HPC) traditionally focuses on running large scientific codes in parallel across large computer clusters called supercomputers. This project looks at OpenMP, a directive-based programming model used to parallelize these scientific simulations. We are interested in mapping some small benchmark codes to a new version of the OpenMP specification and then performing “co-design” by running them on top of an architectural simulator like gem5. The co-design process means that we look at how the application runs on a cluster and also simulate how changes to the underlying hardware might affect the code.
- What’s the goal? This research aims to map some HPC mini-apps to a new version of OpenMP and simulate them on a full-system architectural simulator.
- Required skills: A basic understanding of using Linux, ssh, and accessing X-based software remotely, as well as some general experience with programming in C or C++.
- Desired skills: Experience with OpenMP, having taken one of the HPC or parallel programming classes (CX 4220), or experience with computer architecture simulators (CS 4290).
- Student Receives: Course credit; Possibility of paid position if skill set and results fit longer-term project.
- Future opportunities: This work is relevant to national labs working on the “Exascale Computing Project” and can lead to opportunities in either parallel programming or further architectural simulation projects.
APPLYING 3D STACKED MEMORIES TO MACHINE LEARNING
Machine learning has grown by leaps and bounds, especially with the design of new networks for object classification and detection that run efficiently on Graphics Processing Units (GPUs). However, we would like to look at future platforms for machine learning, including Field Programmable Gate Arrays (FPGAs) and new types of memory technologies, to see how they can make machine learning more efficient.
- What’s the goal? This research aims to map a simple machine learning framework to an FPGA platform, preferably using OpenCL.
- Required skills: A basic understanding of using Linux, ssh, and accessing X-based software remotely, as well as some general experience with programming in C or C++.
- Desired skills: In addition to the above, it would be great to have some experience with either the PyTorch or Caffe machine learning frameworks. Having taken online course material like Udacity’s Intro to Machine Learning or Stanford’s CS231n is also a big plus. Experience with FPGA design and/or high-level synthesis is also desired.
- Student Receives: Course credit; Possibility of paid position if skill set and results fit longer-term project.
- Future opportunities: The long-term goal for this project would be to evaluate different algorithms on an FPGA and Hybrid Memory Cube platform using a framework like Caffe 1 or 2.
For context, here are ongoing research projects that already have students but are available to senior students with related skills.
ALGORITHMS-BASED RESEARCH WITH OPENCL (2016 PROJECT)
FPGAs, like discrete GPUs and Xeon Phi cards, are devices capable of running parallel algorithms at a much higher level of scalability than CPUs, thanks to the large numbers of application “threads” that can be launched in the FPGA fabric. A previous impediment to using FPGAs has been the extreme difficulty of programming these devices in low-level hardware description languages like Verilog and VHDL, and of verifying the resulting designs. Device vendors now support using the OpenCL standard to map high-level algorithms onto FPGA hardware, with improved capabilities for debugging and programmability.
- What’s the goal? This research aims to map basic HPC-related primitives like matrix-multiplication, FFT, and scatter/gather to OpenCL and then test these implementations on both FPGAs and GPUs.
- Required skills: A basic understanding of using Linux, ssh, and accessing X-based software remotely, as well as some general experience with programming in C or C++.
- Desired skills: In addition to the above, it would be great to have some experience with one or more GPU programming models like CUDA, OpenCL, or OpenACC. Experience with Altera FPGA software and tools is a big plus.
- Student Receives: Course credit; Possibility of paid position if skill set and results fit longer-term project.
- Future opportunities: This research area can lead to a variety of follow-on projects, including (1) migrating more complicated algorithms like sparse matrix solvers to optimized FPGA implementations that use OpenCL and Hybrid Memory Cube, (2) evaluating power/performance trade-offs for implementing algorithms on FPGAs vs. GPUs or Xeon Phi, or (3) other student-driven projects in the space of programmable accelerators and algorithms.
ARCHITECTURE-BASED RESEARCH WITH HYBRID MEMORY CUBE (2016 PROJECT)
Hybrid Memory Cube (HMC) is a new standard for 3D stacked memory that promises to offer higher bandwidth and lower power requirements than traditional DRAM chips. Currently, FPGA-based devices are the only platform on which to test this new hardware, and we envision building an infrastructure around the Pico Computing EX-750 FPGA platform, which contains one 4 GB HMC chip and one Xilinx UltraScale part.
- What’s the goal? This research is initially focused on creating an FPGA-based OpenCL framework to interact with hybrid memory cubes as a research platform.
- Required skills: A basic understanding of using Linux, ssh, and accessing X-based software remotely. General experience with programming in C or C++ and at least some exposure to Verilog, VHDL, or SystemVerilog.
- Desired skills: More extensive experience with Verilog, VHDL, or SystemVerilog, and with Altera FPGA software and tools.
- Student Receives: Course credit; Consideration for upcoming funding if skill sets and results are a match.
- Future opportunities: This research area is just opening up, but we envision a large set of architecture-related projects that utilize HMC, including (1) the design of new memory controller interfaces and architectures for interacting with HMC, (2) studies that characterize the performance improvements of applications utilizing HMC, or (3) co-design of FPGA- and HMC-driven enterprise applications in areas such as data analytics.