Posts

Showing posts from June, 2016

VNC connect from Mac OS X El Capitan to Ubuntu Linux

Edit 2016-11-05

This is actually trivial, here are the steps.


On the remote Ubuntu "workstation", run

$ vncserver

On the home Mac machine, run your favorite SSH tunnel:

$ ssh -t -L 5901:localhost:5901 workstation

Open up the VNC screen in a Mac window:

$ open vnc://localhost:5901
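The port 5901 above is not arbitrary: VNC display :N listens on TCP port 5900+N. A small helper (my own sketch, not from the original post) can build the tunnel command for any display number:

```shell
# vnc_tunnel_cmd DISPLAY HOST - print the ssh tunnel command for a given
# VNC display number, using the 5900+N port convention.
vnc_tunnel_cmd() {
  display=$1
  host=$2
  port=$((5900 + display))
  echo "ssh -t -L $port:localhost:$port $host"
}
```

For example, `vnc_tunnel_cmd 1 workstation` prints the exact command used above.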

Testing modern Symfony apps

Introduction The words "modern" and "PHP framework" would usually be considered oxymorons – and for good reasons. However, there is no point in denying that a large number of things on the internet rely on Symfony or some other PHP framework to run. These things will be gradually phased out or upgraded, but for now they can't just remain untested because they're somewhat (out)dated...

API testing The more rigorous part is RESTful API testing, which relies on sending JSON to the server and getting JSON back from the server. Let's create a boilerplate class that's going to help us send POST/GET/DELETE requests wrapped as methods:
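The helper class itself didn't survive this excerpt; as a rough stand-in, the same POST/GET/DELETE wrappers can be sketched at the curl level (the function name, the BASE_URL default, and the example paths are my assumptions, not from the post):

```shell
# build_request METHOD PATH [JSON_BODY] - print the curl command that a
# hypothetical POST/GET/DELETE helper method would issue.
build_request() {
  method=$1
  path=$2
  body=$3
  base=${BASE_URL:-http://localhost:8000}
  if [ -n "$body" ]; then
    # JSON payload goes in the request body with the matching content type
    echo "curl -s -X $method -H 'Content-Type: application/json' -d '$body' $base$path"
  else
    echo "curl -s -X $method $base$path"
  fi
}
```

Piping the printed command to `sh` (or just running curl directly with the same flags) exercises the endpoint the way the test class's methods would.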


And now we're going to use that ancestor class to facilitate some real-life testing of user profile reads and updates.

To run the tests do the following

Frontend functional/flow testing While valuable, the API testing will not cover certain aspects of the application flow. For example you changed some <script> d…

Gromacs 5.x on TITAN Cray machine

For compilation instructions checkout https://groups.google.com/d/msg/plumed-users/Tx29XNNRq8o/xeAu7RNaBAAJ

For a while we've been preparing to run some simulations on this machine, hosted by Oak Ridge National Lab. Every cluster is a bit different and that's definitely true for TITAN: each box has 16 CPUs (arranged in 2 "numas") and 1 K20 NVIDIA GPU. There is no usual MPI-over-InfiniBand setup; the interconnect is some other Cray-specific beast.

Here is an example submission script for a non-replica exchange simulation:

#!/bin/bash
module add gromacs/5.0.2
cd $PBS_O_WORKDIR
# important - allow the GPU to be shared between MPI ranks
export CRAY_CUDA_MPS=1
mpirun=`which aprun`
application=`which mdrun_mpi`
options="-v -maxh 0.2 -s tpr/topol0.tpr "
gpu_id=000000000000 # only 12, discard last '0000'
$mpirun -n 32 -N 16 $application -gpu_id $gpu_id  $options
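The gpu_id string assigns one GPU index per rank on a node, and it is all zeros here because each TITAN box has a single K20 (index 0). A sketch that generates the string instead of hard-coding it (the variable names are mine):

```shell
# Build a gpu_id string with one '0' per rank mapped to the node's GPU 0.
ranks_per_node=12
gpu_id=$(printf '0%.0s' $(seq 1 $ranks_per_node))   # printf repeats the format per argument
echo "$gpu_id"
```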
Submit with


$ qsub -l walltime=1:00:00 -l nodes=2 submit.sh
This requests 2 boxes and starts 32 MPI processes/ranks, 16 per box.…
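The rank arithmetic behind `-n 32 -N 16` follows directly from the node request (a minimal check, assuming one MPI rank per CPU with 16 CPUs per box as described above):

```shell
nodes=2                                   # qsub -l nodes=2
ranks_per_node=16                         # one MPI rank per CPU core on a box
total_ranks=$((nodes * ranks_per_node))   # total ranks across the job
echo "aprun -n $total_ranks -N $ranks_per_node"   # aprun -n 32 -N 16
```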