An overview of changes between EMAN1 and EMAN2

We have tried to preserve as many of the conventions from EMAN1 as possible, to limit the difficulty in making the transition. For example, just like EMAN1, middle-clicking (alt-click on Mac) on just about any window will bring up a powerful 'control-panel' for that window. In addition to the 2-D image display widgets in EMAN1, EMAN2 includes a powerful set of 3-D display widgets. While in no way are these designed to compete with dedicated packages like Chimera, they do provide users with a quick way of looking at 3-D maps and other 3-D data.

Translation Table

Here are a few of the more common EMAN1 commands and their EMAN2 equivalents:

EMAN1 -> EMAN2 : Comments

proc2d -> e2proc2d.py : The ordering of command-line options matters in EMAN2, and it is possible to specify a series of ordered image-processing operations in a single command (see the e2proc2d.py example below). e2proc2d.py can also work with 3-D MRC image stacks in addition to traditional multi-image files.

proc3d -> e2proc3d.py : See proc2d. e2proc3d.py can also handle sets of 3-D volumes in a single file (HDF and BDB only).

iminfo -> e2iminfo.py and e2bdb.py : e2bdb.py works only with BDB databases, but has similar functions. e2iminfo.py can also work with BDB databases.

speedtest -> e2speedtest.py : The numbers from EMAN1 and EMAN2 are not directly comparable, but have a similar range.

refine -> e2refine.py : MANY more options in EMAN2. In particular, note the --twostage option, which can produce speedups of 5-25x while retaining accuracy.

refine2d -> e2refine2d.py : Much faster than EMAN1, with some minor changes in operation.

multirefine -> e2refinemulti.py and e2classifyligand.py : Files are organized very differently than in EMAN1, but the program functions in a similar way (though much faster). e2classifyligand.py is a different program, but can be used for 2-way splits of data.

eman -> e2projectmanager.py : The workflow interface replaces EMAN1's 'custom tutorials'.

boxer -> e2boxer.py : Works completely differently than EMAN1, but has the same overall purpose and name.

v2, v4, eman browser -> e2display.py : e2display.py provides a browser, or can be launched on a single image file from the command line, and shows files of any supported type.

ctfit, fitctf -> e2ctf.py : CTF determination is now fully automatic, including structure factor determination, for 90% of specimens. Easier to use from the workflow.

There are, of course, many others.
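
As a sketch of what the proc2d comment above means, the following applies two image-processing operations in order within a single e2proc2d.py run. The file names, the choice of processors and the cutoff value are arbitrary illustrations, not a recommended recipe:

e2proc2d.py particles.hdf particles_filt.hdf --process=normalize.edgemean --process=filter.lowpass.gauss:cutoff_freq=0.05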

Everything is Modular

In EMAN1, each filter or option had its own name. For example, in proc2d you had 'lp' for a low-pass Gaussian filter or 'tlp' for a sigmoidal filter. In refine, you would pick between 'phasecls', 'fscls' or 'dfilt'. Every time we wanted to add a new capability, we had to code it into every program and invent new option names. In addition, when using 'lp' you had to know that the value you specified was a radius in Fourier pixels, unless you also specified apix=, in which case it would be 1/half-width in Å. This is messy, and tends to cause mistakes.

What's the alternative, you ask? The answer: a modular system. In EMAN2, each class of algorithm, such as filters (processors), aligners, cmps (similarity metrics), etc., maintains a list of all available algorithms, and any program using one of these categories can use any algorithm from the list. Each modular operation takes a list of parameters, and the parameter names are matched whenever possible.

For example, say you had a 3-D model and you wanted to high-pass filter it, mask it with a sharp spherical mask, then low-pass filter the final result. In EMAN1, the only safe way to do it was a series of three commands:

proc3d model.mrc model.mrc hp=100 apix=2
proc3d model.mrc model.mrc mask=42
proc3d model.mrc model.mrc lp=10 apix=2

In EMAN2, it can be done with a single command, and, though more verbose, the options are clear and readable:

e2proc3d.py model.mrc model.mrc --process=filter.highpass.gauss:cutoff_freq=.01 --process=mask.sharp:outer_radius=42 --process=filter.lowpass.gauss:cutoff_freq=.1

and if you wanted to use a hyperbolic tangent low-pass filter instead of a Gaussian, you would simply replace 'filter.lowpass.gauss' with 'filter.lowpass.tanh'. The parameters would be exactly the same.
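
For instance, the same command with the tanh low-pass filter swapped in would look like this (this simply replaces the processor name in the example above; the cutoff values remain the arbitrary ones used there):

e2proc3d.py model.mrc model.mrc --process=filter.highpass.gauss:cutoff_freq=.01 --process=mask.sharp:outer_radius=42 --process=filter.lowpass.tanh:cutoff_freq=.1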

Of course, if the system is modular, you need some mechanism to find out what the available options are. In the GUI, you will be presented with a menu of the available options. For the command line, or if you want the detailed documentation for any particular option, you use the e2help.py command. For example, to list all of the processors, which include filters, masks, mathematical operations, etc. (178 of them at last count), you would type e2help.py processors. This will give a list with one processor per line, along with parameter names. If you want more details, including a definition of each parameter, then 'e2help.py processors -v 2' will give you a more detailed listing.

Similarly, say you want to specify what similarity metric is used when comparing particles to projections during classification. You can get a list by saying e2help.py cmps. To get a list of the categories available in e2help, just type 'e2help.py' with no options and it will list the categories (at present: processors, cmps, aligners, averagers, projectors, reconstructors, analyzers, symmetries and orientgens).
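
Taken together, the e2help.py invocations described in the last two paragraphs look like this on the command line:

e2help.py processors
e2help.py processors -v 2
e2help.py cmps
e2help.py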

This system currently embodies over 240 different algorithms for a wide range of purposes. If you have some image processing task, chances are that e2proc2d.py or e2proc3d.py with the --process option can meet your need. In addition, there is a GUI program, e2filtertool.py, which allows you to graphically create filter chains and adjust their parameters interactively. Don't know how much to low-pass filter that model? Run e2filtertool.py on it, and you can play with different filters and parameters to your heart's content.
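
For example, to experiment interactively with filtration of a 3-D map (the file name here is just a placeholder):

e2filtertool.py model.mrc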

Everything is Saved and (hopefully) Defined

While EMAN1 did preserve some of the information generated during refinement, there were some omissions that people found frustrating. In EMAN2, we try to preserve everything computed during the refinement (with a few impractical exceptions). While this can take a lot of extra disk space during processing, you are always free to delete any intermediate files you don't want, and disk space is increasingly cheap. The EMAN2 Wiki contains pages documenting everything we store.

Where Have the LST Files Gone?

While EMAN2 can read EMAN1-style LST files, they are not used in any of EMAN2's standard processes. Instead, there is the concept of a 'virtual database'. EMAN2 stores most image data in a serverless database system based on BerkeleyDB. These database files can contain metadata (information about the images) as well as data (the images themselves). In a 'virtual database', the metadata is stored, but the image data is drawn from a different database. This mechanism is used for 'sets' in the EMAN2 workflow, permitting you to try processing various subsets of your data. It is worth taking a little time to read about the database.
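
As a rough illustration, images stored in these databases are referred to with 'bdb:' paths of the form bdb:directory#name, and most EMAN2 programs accept them wherever a file name is expected. The directory and set names below are purely hypothetical:

e2iminfo.py bdb:sets#all_particles
e2display.py bdb:sets#all_particles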

Workflow

EMAN2 has adopted a workflow system for most common operations: e2projectmanager.py. This system can take you step by step through processes such as single particle reconstruction, single particle tomography, random conical tilt, etc.
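
To start the workflow, you would typically just launch the program with no arguments from within your project directory:

e2projectmanager.py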

Browser

e2display.py is a file browser and display program, which can examine any supported file, including the BerkeleyDB database files. It can also be launched from the workflow interface. When browsing files, remember that right-clicking on a file will bring up a menu of options other than the default (double-click) visualization.
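
For example, to open the browser, or to display a single image file directly (the file name is only a placeholder):

e2display.py
e2display.py my_stack.hdf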

Single Particle Refinement in EMAN2 vs EMAN1

In this section, we consider how traditional single particle refinements worked in EMAN1, and how they now work in EMAN2. One of the largest differences is that, in response to many user requests, EMAN2 now saves all intermediate information and leaves it to the user to delete anything they don't need (after all, disk space has become relatively cheap). Preserving this information also permits a number of new algorithms that were not feasible in EMAN1.

The refinement strategy in EMAN1 is:

  1. start with particles & an initial 3-D model

  2. reproject the 3-D model (project3d)

  3. reference-based classification of particles (classesbymra)

  4. iterative class-averaging (classalignall -> classalign2)

  5. reconstruction by direct Fourier inversion (make3d)

  6. post-processing (masking, mass adjustment, filtration)
  7. iterate -> 2

EMAN2, without the twostage option, is very similar:

  1. start with particles & an initial 3-D model

  2. reproject the 3-D model (e2project3d)

    • Projection orientations are 'perturbed' slightly in each iteration to help prevent the buildup of noise bias. This may slightly reduce resolution as compared to EMAN1, but the resolution you get in EMAN2 is more likely to be 'real'.
  3. reference-based classification of particles
    1. compute a similarity matrix between the particles and the projections (e2simmx)

    2. classify the particles based on the similarity matrix (e2classify)

  4. iterative class-averaging (e2classaverage)

    • In EMAN1, when classiter was specified, there was an offset of 2, i.e. 3 actually meant 1. In EMAN2, 1 means 1, 2 means 2, etc.
  5. reconstruction by direct Fourier inversion (e2make3d)

  6. post-processing (masking, mass adjustment, filtration)
  7. iterate -> 2

Finally, we consider how the twostage option, which increases overall speed by 2-30x, impacts the process:

  1. start with particles & an initial 3-D model

  2. reproject the 3-D model (e2project3d)

    • Note that EMAN2 can use smaller angular steps than EMAN1 with much better scaling as long as twostage is specified
  3. reference-based classification of particles
    1. compute a similarity matrix between the particles and the projections (e2simmx2stage)

      1. compute a similarity matrix between all of the projections (e2simmx)

      2. identify a subset of projections which are the most similar
      3. align and average the similar projections together to produce a reduced representation for initial classification

      4. compute a similarity matrix between the particles and the reduced set of averaged projections (e2simmx)

      5. identify the best N reduced-representation projections for each particle, to determine the specific orientations we must check
      6. compute the normal similarity matrix between particles and the full projections, but this matrix is sparsely populated (this sparseness is where the time savings occurs)
    2. classify the particles based on the similarity matrix (e2classify)

  4. iterative class-averaging (e2classaverage)

  5. reconstruction by direct Fourier inversion (e2make3d)

  6. post-processing (masking, mass adjustment, filtration)
  7. iterate -> 2

There are many output files produced by refinement in EMAN2, following its strategy of keeping everything unless the user explicitly removes it. These files are documented in the last section here.
