Installing the Titanium Compiler

This file describes the installation process for the Titanium compiler.

If you don't already have it, you need to get the source code for the
compiler and runtime system, which is available from the Titanium home
page at:

    http://titanium.cs.berkeley.edu/

Most likely, you will want the latest release.  The distribution is a
roughly 6 megabyte gzip'd tar archive.

Note that things are slightly different if you are checking Titanium
out of PRCS, the source code control system used for Titanium
development.  The instructions below assume that you already have a
packaged source distribution in hand.

Once you have the distribution, unpack it in a directory of your
choosing.  There is no "Makefile", but there is a "configure" script.
(If you have ever compiled GNU software such as Emacs or gcc, this
should be familiar.)  The "configure" script determines what kind of
machine it is running on and generates a suitable "Makefile".  You
should just be able to run "configure" with no arguments.  If you want
to adjust the install directories or optional compiler components, run
"configure --help" to get a list of options.  (See below for contact
information if you need more details.)

Note that if you have one of the following system-area networks, you
will likely want to pass the corresponding configure switches to
enable the appropriate GASNet-based backend support:

    IBM SP:    --enable-lapi
    Quadrics:  --enable-elan
    Myrinet:   --enable-gm   (also set the environment variables
               GM_INCLUDE and GM_LIB to the appropriate paths for
               your Myrinet-GM installation)

In addition to the source code, there are a few other things you will
need.  The "tcbuild" wrapper script requires Perl 5.004 or later, and
the "configure" script will complain if it cannot find it.  Building
the compiler also requires a reasonably recent version of GNU make and
a working C++ compiler (we recommend g++ or egcs-g++, but others such
as KAI C++ have been used successfully).  You can control which C and
C++ compilers are selected by setting CC and CXX in your environment
before running configure.
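For example, a typical unpack-and-configure sequence might look like
the following; the archive name, compiler choices, and install prefix
shown here are only placeholders for whatever applies on your system:

    % gunzip -c titanium-X.Y.tar.gz | tar xf -
    % cd titanium-X.Y
    % env CC=gcc CXX=g++ ./configure --prefix=/usr/local/titanium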
Finally, there are several parallel backends available.  Which ones
get built depends on what configure can find on your system:

  - The "sequential" backend is always available.  It supports only
    purely sequential code.

  - If the configure script finds the POSIX thread library
    (libpthread), then the "smp" backend will be available.  This
    backend uses POSIX threads for parallelism, making it relatively
    portable.  It can also simulate more parallelism than the
    hardware provides, so you can run, say, an eight-way job on one
    CPU (useful for testing and debugging).

  - If the configure script finds MPI 1.1 support on the machine,
    then the "mpi-cluster-uniprocess" and "mpi-cluster-smp" backends
    will be available.  These compile Titanium applications into
    MPI-based C code, which is then compiled to native code and run
    like any other natively written MPI application.  This provides a
    very portable yet high-performance way to run distributed-memory
    Titanium applications.

  - If the configure script finds UDP support on the machine, then
    the "udp-cluster-uniprocess" and "udp-cluster-smp" backends will
    be available.  These provide a portable way to run
    distributed-memory applications for testing and debugging, or for
    clusters lacking high-performance networking hardware.  They
    should work on any cluster that provides a standard TCP/IP stack
    and sshd.

  - The GASNet backends provide high-performance native support for
    the following system-area networks:

        IBM SP:    gasnet-lapi-uni, gasnet-lapi-smp
        Quadrics:  gasnet-elan-uni, gasnet-elan-smp
        Myrinet:   gasnet-gm-uni,   gasnet-gm-smp

    To enable these, you need to pass explicit configure switches
    (see above).

  - If the configure script finds the AM-2 runtime library (libam2),
    then the "now-cluster-uniprocess" backend will be available.
    This backend uses a Berkeley-style Network of Workstations (NOW)
    for parallelism, with Active Messages II providing communication.
    It is one way to run Titanium in distributed-memory environments.

The configure script looks for all of the above and builds the
backends that seem appropriate; the configure output indicates which
backends were detected and selected.  This should all just work
automatically, but there are configure options for suppressing
particular backends if you wish.

Once the configure script finishes, type "make" (or "gmake") to build
the compiler.  This will take a while.

After the compiler itself is built, you can precompile the standard
class libraries.  These include classes such as java.lang.String,
java.io.FileInputStream, and so on.  Use the following commands:

    % cd runtime/backend
    % make tlib

Each backend has two precompiled libraries, one built for debugging
and one built optimized, so this step takes a rather long time.  For
example, on one of the Berkeley machines, the total time from
unpacking to having all of the prebuilt libraries was just under two
hours.

You now have the Titanium compiler and the prebuilt standard class
libraries.  At this point, you have two options.  You can use the
Titanium compiler in place, right out of the build tree; this is
handy if you are just trying things out, or if you are the only one
using it.  Or you can install everything so that other people can use
Titanium as well.

To use the compiler in place, use the "tcbuild" script in the
"tcbuild" subdirectory.  See the Titanium home page for documentation
on how to run the compiler using "tcbuild".

To install the compiler for others to use, type "make install" (or
"gmake install").  By default, the various components of the compiler
and runtime are installed under "/usr/local".  To install them
elsewhere, use "--prefix" or the other related options when you run
the configure script.

If problems arise, or you have any further questions, please send
mail to titanium-devel@cs.berkeley.edu.  Please include as much
detail as possible, even if it seems irrelevant.
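For quick reference, the complete build sequence described above
looks roughly like the following.  The program name Hello.ti is
purely illustrative, and the final two lines are alternatives (use in
place versus installing); see the tcbuild documentation on the home
page for the actual compiler options:

    % gmake                       (build the compiler)
    % cd runtime/backend
    % gmake tlib                  (precompile the standard libraries)
    % cd ../..
    % tcbuild/tcbuild Hello.ti    (option 1: use the compiler in place)
    % gmake install               (option 2: install under the configured prefix)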