Installing the Titanium Compiler
================================

This file describes the installation process for the Titanium
compiler.  If you don't already have it, you need to get the source
code for the compiler and runtime system, which is available from the
Titanium home page at:

    http://titanium.cs.berkeley.edu/

Most likely, you will want the latest release.  The distribution
consists of a roughly 6-megabyte gzip'd tar archive.

Requirements
============

In addition to the source code and a working C compiler, there are a
few other things you will need.  The "tcbuild" wrapper script requires
Perl 5.004 or later, and the "configure" script will complain if it
cannot find it.  Building the compiler also requires a reasonable
version of GNU make and a working C++ compiler (we recommend g++ 2.95
or later, but others such as IBM xlC, HP C++, Intel C++, and MIPS C++
have been used successfully).  You can control which C and C++
compilers are selected by setting CC and CXX in your environment
before running configure.

Configuring
===========

Once you have the distribution, unpack it in a directory of your
choosing.  There is no "Makefile", but there is a "configure" script.
(If you have ever compiled any GNU software in the past, such as Emacs
or gcc, then this should be familiar.)  You run the "configure"
script, which determines what kind of machine it is running on and
generates a suitable "Makefile".

Note that the following instructions assume the machine you're using
for compiling is identical to the compute nodes that will run
application jobs -- if this is not the case, see the cross-compilation
instructions at the end of this document.

You should just be able to run "configure".  If you want to adjust
install directories or optional compiler components, use "configure
--help" to get a list of options.  (See below for contact information
if you need more details.)

Note that if you have one of the following system-area networks,
you'll likely want to pass the corresponding configure switch to
enable the appropriate GASNet-based backend support:

    IBM SP:               --enable-lapi
    Quadrics:             --enable-elan
    Myrinet:              --enable-gm
    InfiniBand:           --enable-vapi
    Cray X1 / SGI Altix:  --enable-shmem

In each case you may also need to set additional environment variables
to indicate the location of your network drivers if the configure
script fails to detect them automatically (an error message will
prompt you if that's the case).

If you want to keep the build files separate from the source files
(*HIGHLY* recommended for compiler developers), run the configure
script from an empty working directory (e.g., a sibling of the source
directory).  The current working directory when configure is run
becomes the build directory, and will contain the Makefiles and all
generated object files and executables.
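For example, a minimal out-of-tree configuration on a hypothetical
Myrinet cluster might look like the following (the directory names,
compiler choices, and install prefix are all illustrative):

    % mkdir ../ti-build
    % cd ../ti-build
    % env CC=gcc CXX=g++ ../titanium/configure --enable-gm --prefix=/usr/local/titanium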
PRCS snapshots
==============

If your copy of Titanium is a snapshot from the PRCS source code
repository, you may need to run the ./Bootstrap script to create the
configure script.  This is *not* required or recommended if you've
downloaded an official packaged distribution.

Backends
========

There are several compilation backends available, providing different
capabilities to the applications you build.  Which backends get built
depends upon what configure can find on your system:

- The "sequential" backend is always available.  It only supports
  purely sequential code.

- If the configure script finds the POSIX thread library (libpthread),
  then the "smp" backend will be available.  This uses POSIX threads
  for parallelism, making it relatively portable.  It is also capable
  of simulating more parallelism than the hardware provides, which
  means that you can run eight-way tasks on one CPU (good for
  testing/debugging).

- If the configure script finds MPI 1.1 support on the machine, then
  the "mpi-cluster-uniprocess" and "mpi-cluster-smp" backends will be
  available.  These compile Titanium applications into MPI-based C
  code, which is then compiled to native code and run like any other
  natively written MPI application.  This provides a very portable yet
  high-performance way to run distributed-memory Titanium applications.

- If the configure script finds UDP support on the machine, then the
  "udp-cluster-uniprocess" and "udp-cluster-smp" backends will be
  available.  These provide a portable way to run distributed-memory
  applications for testing and debugging purposes, or for clusters
  lacking high-performance networking hardware.  They should work on
  any cluster that provides a standard TCP/IP stack and sshd.

- The GASNet backends provide high-performance native support for the
  following system-area networks:

      IBM SP:          gasnet-lapi-uni,   gasnet-lapi-smp
      Quadrics:        gasnet-elan-uni,   gasnet-elan-smp
      Myrinet:         gasnet-gm-uni,     gasnet-gm-smp
      InfiniBand:      gasnet-vapi-uni,   gasnet-vapi-smp
      Cray/SGI SHMEM:  gasnet-shmem-uni

  To enable these, you'll need to pass explicit configure switches
  (see above).

- If the configure script finds the AM-2 runtime library (libam2),
  then the "now-cluster-uniprocess" backend will be available.  This
  uses a Berkeley-style Network of Workstations (NOW) for parallelism,
  with Active Messages II providing communications.  This is one way
  to run Titanium in distributed-memory environments.

The configure script will look for all of the above and build those
backends that seem appropriate (the configure output indicates which
backends were detected and selected).  It should all just work
automatically; however, there are configure options for suppressing
certain backends if you wish to do so.

Build the compiler
==================

Once the configure script finishes, type "make" (or "gmake") to build
the compiler.  This will take a while.

After the compiler itself is built, you must precompile the standard
class libraries.  These include such things as java.lang.String,
java.io.FileInputStream, and so on.  Use the following commands:

    % cd runtime/backend
    % make tlib

Each backend has two precompiled libraries: one built for debugging,
and one built optimized.  So this will take a rather long time.  For
example, on one of the Berkeley machines, the total time from unpack
to all prebuilt libraries was just under two hours.

Using the compiler
==================

You now have the Titanium compiler and the prebuilt standard class
libraries.  At this point, you have two options.  You can use the
Titanium compiler in place, right out of the build tree.  This is
handy if you are just trying things out, or if you are the only one
using it.  Or you can actually install everything so that other
people can use Titanium as well.

To use the compiler in place, use the "tcbuild" script in the
"tcbuild" subdirectory.  See the Titanium home page for documentation
on how to run the compiler using "tcbuild".

To install the compiler for others to use, type "make install" (or
"gmake install").  By default, the various components of the compiler
and runtime will be installed under "/usr/local".  To install them
elsewhere, use "--prefix" or other related options when you run the
configure script.
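As a concrete illustration, a complete build-and-test session from
the top of the build tree might look like the following ("Hello.ti"
is a hypothetical single-file application; see the home page
documentation for the exact tcbuild options):

    % gmake
    % gmake -C runtime/backend tlib
    % tcbuild/tcbuild Hello.ti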
Contact Info
============

If problems arise, or you have any further questions, please send
mail to titanium-devel@cs.berkeley.edu.  Please send as much detail
as possible, even if it seems irrelevant.

Cross Compilation Instructions
==============================

Titanium includes limited support for cross-compilation, for systems
where it is undesirable or impossible to build the compiler and/or
build applications on a machine identical in configuration to the
compute nodes that will run applications.  Below are guidelines for
cross-compiling on specific systems:

--------------------------------------------------------------------------
Cross-compile build instructions for the Cray X-1:

Titanium can be built and used on the Cray X-1 using the X-1 frontend
nodes -- i.e., without any cross-compilation: just log in to the
X-1/Unicos system and configure/make as on any other system.  However,
the configure, compiler build, and application build steps are all
very slow when run in this manner, due to the slowness of the Cray C
compiler on the X-1, which actually transparently ships source files
to a C compile server that performs cross-compilation behind the
scenes.  If your X-1 allows direct user login access to the
cross-compile server, you can skip all this overhead by building the
Titanium compiler as a cross-compiler on that machine and building
your Titanium applications from that machine instead.

The recommended backend on the X-1 is gasnet-shmem-uni; however, the
sequential, smp, and mpi-based backends are also available.

1. cd into the top source directory.

2. Symlink the appropriate cross-configure script into your top-level
   source dir:

       ln -s runtime/gasnet/other/contrib/cross-configure-phoenix .

3. Open the cross-configure script in an editor and edit the values
   as necessary to match your system's compiler paths (see the sketch
   after these steps):

   - CC and MPI_CC should both be set to the working MPI compiler for
     the *target* architecture.
   - MPIRUN_CMD might need tweaking based on your job system setup.
   - CXX needs to be set to a working C++ compiler for the *host*
     architecture, as it's used to build the Titanium compiler
     binaries that will run on the host machine during Titanium
     application compilation.

     NOTE: you may need to set GCC_EXEC_PREFIX=/usr/bin/ in your
     environment during configure *and* build in order to get the
     correct assembler when using g++ as CXX (just add it to your
     login scripts).

4. cd into your desired build directory.

5. Run the cross-configure script and pass in any desired options, e.g.:

       ../srcdir/cross-configure-phoenix --enable-shmem --enable-mpi --prefix=/path/to/install

6. Build (and optionally install) as usual:

       gmake ; gmake -C runtime/backend tlib ; gmake install
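As a sketch of the edits in step 3, the relevant assignments in the
cross-configure script might end up looking something like this (the
compiler names and the mpirun command template are site-specific
assumptions -- check the comments in the script itself):

    CC='cc'                        # target C compiler
    MPI_CC='cc'                    # target MPI C compiler
    CXX='g++'                      # host C++ compiler, used to build tc itself
    MPIRUN_CMD='mpirun -np %N %C'  # adjust to match your job system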
--------------------------------------------------------------------------
Cross-compile build instructions for BlueGene/L & Cray XT-3:

These platforms are only supported using cross-compilers, because for
various reasons the compute nodes cannot run the C compiler and
associated machinery.  These platforms currently only support the
sequential and low-performance mpi-based backends, and have not been
tuned in any way.

1. cd into the top source directory.

2. Symlink the appropriate cross-configure script into your top-level
   source dir:

       BG/L: ln -s runtime/gasnet/other/contrib/cross-configure-bgl .
       XT-3: ln -s runtime/gasnet/other/contrib/cross-configure-phantom .

3. Open the cross-configure script in an editor and edit the values
   as necessary to match your system's compiler paths:

   - CC and MPI_CC should both be set to the working MPI compiler for
     the *target* architecture.
   - MPIRUN_CMD might need tweaking based on your job system setup.
   - CXX needs to be set to a working C++ compiler for the *host*
     architecture, as it's used to build the Titanium compiler
     binaries that will run on the host machine during Titanium
     application compilation.

4. cd into your desired build directory.

5. If you want to compile with gcc, set USE_GCC=1 in the environment;
   otherwise the default is xlc (for BG/L) or pgcc (for XT-3).

6. Run the cross-configure script and pass in any desired options, e.g.:

       BG/L: ../srcdir/cross-configure-bgl --enable-mpi --prefix=/path/to/install
       XT-3: ../srcdir/cross-configure-phantom --enable-mpi --prefix=/path/to/install

7. Build the Titanium compiler:

       gmake

   BG/L NOTE: the compiler doesn't quite build "out of the box" with
   xlc, due to a few bugs in xlc 7.0.  Here are the workarounds:

   - xlc: runtime/omega-runtime/omega_lib/obj/Exit.o fails to build;
     copy the compile command and recompile without -qnoeh.
   - xlc/ndebug: runtime/backend/*/dtoa.o fails to build; copy the
     compile command and recompile without -qsmp=noauto -O5.

8. Build the tlibs (and optionally install) as usual:

       gmake -C runtime/backend tlib ; gmake install

--------------------------------------------------------------------------
Generic cross-compile build instructions:

If your system is not listed above as a supported cross-compilation
system, you may still be able to make it work yourself with the
following generic instructions -- note that while Titanium is quite
portable, for obvious reasons we don't guarantee it will work out of
the box on machine types we've never tried before.

1. Follow the generic cross-compilation instructions in:

       runtime/gasnet/other/cross-configure-help.c

   during which you'll create a cross-configure script in your
   top-level source directory.

2. Edit the cross-configure script to ensure it correctly reflects
   the characteristics of the target architecture.  Note that for
   Titanium, the C++ compiler in $CXX should be one appropriate for
   the *frontend* machine, as C++ is used to build the tc compiler,
   which runs on the frontend machine to build program executables.

3. Run cross-configure with the same options you'd pass to configure,
   as described in the regular build instructions.

4. Build tc and the tlibs as described in the regular build
   instructions (a concrete session sketch follows).
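To make the generic procedure concrete, a complete session might look
like the following, where "cross-configure-mymachine" is a
hypothetical script created in step 1 and all paths are illustrative:

    % cd /path/to/ti-build               # an empty build directory (see step 3)
    % /path/to/titanium-src/cross-configure-mymachine --enable-mpi --prefix=/path/to/install
    % gmake                              # build tc
    % gmake -C runtime/backend tlib      # build the tlibs
    % gmake install

--------------------------------------------------------------------------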