
Appendix C
UCI Research Cyberinfrastructure Draft Proposal

“Research cyberinfrastructure” (CI) is the phrase that has been coined to describe the computing cycles, high-speed networking, massive storage, professional support, software, and related technology elements required by modern scientific research.  The term also implies a close integration of cycles, storage, and networking, and includes such concepts as “grid computing.”

UCI research cyberinfrastructure currently includes departmental and research group computing facilities and support staff, central services in OIT and elsewhere, and the campus network.  OIT has two full-time discipline-oriented computing specialists, and provides data center, system administration, and specialized network services in support of research.  OIT also operates the campus “Medium Performance Computing” (MPC) compute cluster, which consists of about 130 permanent processor nodes purchased collaboratively by OIT and researchers.

OIT has been consulting with faculty, school computing directors, and our UC colleagues to review cyberinfrastructure needs.  Although the multi-million-dollar central computing systems of the past are not an important part of the equation at this time, it is clear that additional investment must be made in UCI’s research computing environment.  Such investment is one component of providing the facilities and institutional support required to attract the best faculty to the campus, and it will be needed to reach UCI’s goal of enhancing its standing among the best comprehensive research universities in the country.

UCI’s cyberinfrastructure plan will take into account that large research projects can afford the rich computing environments they require, and that small research projects can take advantage of relatively inexpensive but high-performance desktop computing and storage.  It will focus on cyberinfrastructure elements that are best provided or coordinated centrally, such as high-speed networking and grid computing, and on those medium-performance services that are not currently well addressed.

UCI’s research cyberinfrastructure elements will include:

  1. Programming support and consulting:  Our survey of faculty indicates that multi-disciplinary programming support is a recurring, unmet need that limits research productivity.  We envision adding discipline-specific professional consultants who would provide scientific programming advice and assistance, database design, and other services.  The consultants would coordinate the efforts of graduate-student programmers who could be made available to research projects on a recharge basis.

  2. Data Storage:  Researchers would benefit from having access to a well-managed, large-capacity, hierarchical data-storage pool.  It would consist of several interacting, integrated sub-systems in a layered approach, ranging from slow to high-throughput devices, with associated file backup.  The goal would be to administer this storage in a way that allows individual groups to fund additional dedicated storage to support their specific research efforts.

  3. Central Compute Cluster Support:  There are several modes of cluster support that should be included in our plans.  First, one or more shared clusters should be operated in a manner similar to the current MPC cluster.  The goal is to provide an environment to which faculty can add processors to serve their compute needs without having to provide physical facilities or system administration.  Faculty and campus contributions to a shared processor pool will ensure that general-access compute cycles are available.  An annual funding stream would be identified to cover the cost of system administration staff to operate the equipment and support researcher use, as well as to regularly replace central processors with evolving technology.  System administration staff would be experts on grid computing, and the central clusters would form the cornerstone of a new “UCI Grid.”

    An alternate mode of support would be fee-based system administration of computer clusters dedicated to individual research projects.  The use of standardized software and hardware platforms would allow cost-effective support and easy integration into the UCI Grid.

  4. Academic Data Center (ADC):  The ADC is a machine room operated by OIT that is available for housing compute clusters owned by research groups and other entities.  The ADC provides power, air-conditioning, and security that would not be cost-effective for individual research groups to provide on their own.

  5. Very High-Speed Network Connectivity and/or Optical Links:  Research cyberinfrastructure includes high-speed network connectivity between compute, storage, and visualization nodes.  Much of the need will be accommodated by continuing efforts to maintain and enhance connectivity across all buildings.  Some research efforts may require dedicated bandwidth outside of the normal UCInet capability.  This is an area where more dialog is needed.


Additional discussion about research computing and networking is required before consensus is reached concerning exact cyberinfrastructure requirements.  However, strawman start-up scenarios and cost estimates are presented below to address what has been discussed thus far:

CI Element | Cost | Scenario
Programming / consulting | $123k | Create 1 FTE discipline-specific consultant and two 0.5 FTE graduate-student programmer positions.
Data Storage | $70k | In collaboration with Biological Sciences, create a shared, scalable disk array.  Acquire an initial 16-terabyte system for campus-wide use that can be expanded with additional dedicated research-project space.
Cluster Support | $100k | Provide an annual budget to support 1 FTE system administrator with grid computing expertise, augmenting existing OIT cluster support staff.  Include enough funding to replace or add approximately 16 high-end processors each year as the cornerstone of a new “UCI Grid.”
Academic Data Center | ?? | Expand ADC floor space and/or create satellite ADCs.
Network | ?? | More discussion is needed.