First, you need to decide on a number of issues:
1.  whether your job needs to be parallel (MPI) or not
2.  whether your job needs to run on a GPU
3.  whether your job needs to run GROMACS 4.x or GROMACS 5.x

There are various modules compiled to work with different combinations of the above options.
Once you have decided, you will need an appropriate job submission file.

Have a look at http://www.gromacs.org/Documentation/Acceleration_and_parallelization
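
Whichever combination you choose, the job is submitted to LSF in the usual way, for example (assuming your submission script is saved as myjob.bsub, a name used here only for illustration):

    bsub < myjob.bsub

You can then follow its progress with bjobs and check the -oo/-eo output files when it finishes.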

-------------------------------------------------------------------------------------------

Non-MPI
If you are happy to work within one node (note: there are 16 cores on the Ethernet nodes), you should be able to use the non-MPI versions on the standard Ethernet queues.
The available modules are:
1.  gromacs/4.5.7/intel
2.  gromacs/5.1/intel
3.  gromacs/5.1.4/gcc
See the relevant bin directory under /gpfs/software/gromacs/ for the available commands.
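
For example, to see which GROMACS modules are installed and which commands a given build provides (the path below assumes the same /gpfs/software/gromacs/<version>/<build>/bin layout as the GPU build listed later):

    module avail gromacs
    ls /gpfs/software/gromacs/5.1.4/gcc/bin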

Example script for a single-node job (16 cores):
    #!/bin/sh
    #BSUB -q long-eth
    #BSUB -x
    #BSUB -J pdb2gmx_job
    #BSUB -oo pdb2gmx.out
    #BSUB -eo pdb2gmx.err

    . /etc/profile
    module add gromacs/4.5.7/intel   # or gromacs/5.1/intel or gromacs/5.1.4/gcc

    cd <put in the path to the right directory>

    pdb2gmx_d -ignh -ff 53a6 -f 4fdtchaina-coot-0.pdb -o 4fdtchaina-coot-0gmx.pdb -p 4fdtchaina-coot-0.top -water spc
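
The same non-MPI modules can also run mdrun across all 16 cores of the node using GROMACS's built-in thread-MPI/OpenMP parallelism, so no MPI launcher is needed. A minimal sketch, assuming the gromacs/5.1.4/gcc module and a run input file called topol.tpr (a placeholder name):

    module add gromacs/5.1.4/gcc
    gmx mdrun -nt 16 -s topol.tpr -deffnm md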

-------------------------------------------------------------------------------------------

Parallel
If you need more than 16 cores, you will need to pick one of the parallel modules. Note there are 28 cores on the Mellanox nodes.
The key to parallel jobs is requesting the right number of slots for the domain decomposition; hopefully you already know about this.
see http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
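
For example, to fill two Mellanox nodes completely you would request 56 slots (2 x 28; the job size here is purely illustrative) and keep the ptile at 28:

    #BSUB -n 56
    #BSUB -R 'span[ptile=28]'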

The available modules are:
1.  gromacs/4.6.5/intel-mellanox
See /gpfs/software/gromacs/4.6.5/intel-mellanox/bin/ for the commands to use (e.g. gmx_mpi, gmx_mpi_d).

Example submission script:
    #!/bin/bash
    #BSUB -q mellanox-ib
    #BSUB -n ????              # set ???? to the number of cores you need
    #BSUB -R 'span[ptile=28]'
    #BSUB -oo test.log
    #BSUB -eo test.err
    #BSUB -J "test"

    . /etc/profile
    module add gromacs/4.6.5/intel-mellanox

    cd <put in the path to the right directory>

    pdb2gmx_mpi_d -ignh -ff 53a6 -f 4fdtchaina-coot-0.pdb -o 4fdtchaina-coot-0gmx.pdb -p 4fdtchaina-coot-0.top -water spc

2.  gromacs/5.1.4/gcc-openmpi-mellanox

Example submission script:
    #!/bin/bash
    #BSUB -q mellanox-ib
    #BSUB -n ????              # set ???? to the number of cores you need
    #BSUB -R 'span[ptile=28]'
    #BSUB -oo test.log
    #BSUB -eo test.err
    #BSUB -J "test"

    . /etc/profile
    module add gromacs/5.1.4/gcc-openmpi-mellanox

    cd <put in the path to the right directory>
    gmx_mpi mdrun -npme -1 -s step7_production-7.tpr -o step7_production-7.trr \
        -c step7_production-7.gro -x step7_production-7.xtc -e step7_production-7.edr \
        -g step7_production-7.log -cpo step7_production-7.cpt
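
How the MPI ranks are actually launched depends on how the OpenMPI module is integrated with LSF; if your job appears to start with only one rank, you may need to launch mdrun through mpirun yourself, for example (LSB_DJOB_NUMPROC is set by LSF to the number of slots requested):

    mpirun -np $LSB_DJOB_NUMPROC gmx_mpi mdrun -s step7_production-7.tpr -deffnm step7_production-7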
-------------------------------------------------------------------------------------------

GPU
If you need to run on a GPU, use the module below. Note that GROMACS will apportion work between the GPU and the CPU as appropriate.

The available modules are:
1.  gromacs/5.1.4/gcc-gpu-cuda8
See /gpfs/software/gromacs/5.1.4/gcc-gpu-cuda8/bin/ for the commands to use (e.g. gmx).

Example script:
    #!/bin/bash
    #BSUB -q gpu
    #BSUB -oo PME-opt-%J.log
    #BSUB -eo PME-opt-%J.err
    #BSUB -J "PME-opt"

    . /etc/profile
    module add gromacs/5.1.4/gcc-gpu-cuda8

    cd <put in the path to the right directory>

    gmx grompp -f <mdp-file>
    # or
    gmx mdrun -s j10-310_5esp12-6.tpr
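
By default mdrun detects the GPU and splits the work between GPU and CPU automatically, but the split can also be controlled explicitly. A minimal sketch, assuming a single GPU on the node (the thread counts are illustrative and should be matched to the node's CPU cores):

    gmx mdrun -s j10-310_5esp12-6.tpr -ntmpi 1 -ntomp 8 -gpu_id 0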

 
