Biowebtronics: biotech, startups, web development and the internet of things.

Flow chemistry automation with Pumpy

Flow chemistry is pretty hot right now; check out anything by Steve Ley, he does some cool stuff. Flow chemistry and microfluidics are really popular because they enable precise control over the spatial distribution of materials during continuous processes. This is useful when you want to continuously combine materials, such as in a chemical process, ultimately yielding a higher throughput than batch processing. Flow chemistry and microfluidics also enable the emulation of biological systems, replicating kinetics and shear flows at surfaces that are not necessarily possible in bulk processes.

During my PhD I used two Harvard Apparatus PHD 2000 syringe pumps to run an experiment where I would alternately flow two solutions through a membrane for 40 hours. Getting into programming at the time, I wanted to automate this. Here's a quick video of the setup.

Automation with Pumpy and a Raspberry Pi

A colleague of mine at Imperial College, Tom Phillips, had made a really nice python package called Pumpy. Pumpy enabled me to express protocols as python scripts, which means they can be version controlled with Git and shared and improved on GitHub. This is really what I want the future of protocols to be like, because it enables so much collaboration on method development and, ultimately, repeatability in the lab.

The setup required:

  • 1 Raspberry Pi B+
  • 1 USB to RS232 converter
  • 2 RS232 cables
  • + all of the tubing and connectors etc for the fluidics
  • I also used a Sensirion solid-state flow meter (these things are awesome!), though sadly I didn't have time to get it working with the Raspberry Pi.

Because I'm a super nerd and have the excitement of a puppy when my devices talk to me via the internet (#IoT), I added a Pushover method to send me a push notification at the start of each cycle of the alternating process.

pump notifications Internet of pumps #IoP

This is what a Pumpy protocol looks like for an alternating flow from two pumps:

import sys
import pumpy
import logging
import time
import os
import httplib, urllib

pushover_user_key = os.environ['PUSHOVER_USER_KEY']
pushover_app_key = os.environ['PUSHOVER_PUMPY_APP_TOKEN']

## Here I used pushover to notify on the run progress.
def push(message):
  conn = httplib.HTTPSConnection("api.pushover.net:443")
  conn.request("POST", "/1/messages.json",
    urllib.urlencode({
      "token": pushover_app_key,
      "user": pushover_user_key,
      "message": message,
    }), { "Content-type": "application/x-www-form-urlencoded" })
  conn.getresponse()

logging.basicConfig(level=logging.INFO)

## Communication config
chain = pumpy.Chain('../../../../../dev/ttyUSB0')
PHDcoll = pumpy.PHD2000(chain,address=1, name="PHDcoll") # PHD2000
PHDha = pumpy.PHD2000(chain,address=12, name="PHDha") # special pump

## Experimental parameters
tVol = 2000 # target volume: 2 mL, in microliters
tTime = 60 # infusion time: 60 minutes
flowRate = tVol/tTime # microliters per minute
cycles = 21 # number of alternating cycles (i = 0 to 20)

# Set syringe diameters (BD Plastipak 50/60 mL)
PHDcoll.setdiameter(26.7)
PHDha.setdiameter(26.7)
# Set Flow Rates
PHDcoll.setflowrate(flowRate)
PHDha.setflowrate(flowRate)
# Set each target volume for each infuse.
PHDcoll.settargetvolume(tVol)
PHDha.settargetvolume(tVol)

## Begin the alternating process
for i in range(0,cycles):
  push("Starting Coll cycle: " + str(i))
  PHDcoll.infuse()
  logging.info('coll: infusing, cycle ' + str(i))
  PHDcoll.waituntiltarget()
  PHDcoll.stop()
  logging.info('coll: stopped infusing, cycle ' + str(i))
  push("Starting HAnp cycle: " + str(i))
  PHDha.infuse()
  logging.info('HAnp: infusing, cycle ' + str(i))
  PHDha.waituntiltarget()
  PHDha.stop()
  logging.info('HAnp: stopped infusing, cycle ' + str(i))

push("Job Complete :)")
sys.exit()

Tom did a really nice job with Pumpy; it's so straightforward to use. I recommend that anyone who does any microfluidics or flow chemistry consider giving it a go. What's even cooler is that you can SSH into the Raspberry Pi from anywhere and execute experiments!
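There's nothing clever to the remote bit; a minimal session might look like the following (the hostname and script name here are placeholders). For a 40-hour run it's worth launching the script with nohup, or inside screen/tmux, so it survives the SSH session dropping:

$ ssh pi@raspberrypi.local
$ nohup python alternating_flow.py &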

Biochemistry from a coffee shop in Vienna

Just a flat white and some E. coli please

Drinking a flat white in a coffee shop in Vienna, I eagerly anticipate the arrival of a cell culture plate with bacteria on it.

The E. coli were in California, not on the table in front of me. The bacterial samples I had sent to Transcriptic had arrived a couple of days earlier, and while I was in Vienna I was keen to begin running experiments on the strains and start generating results.

Pioneers Festival

I was in beautiful Vienna for Pioneers Festival 2015. It was a fantastic celebration of entrepreneurs and dreamers. The atmosphere was electric, and everyone left with the desire to think bigger and go forth and execute.

vienna photo

Working on the Transcriptic platform

I was keen to get started experimenting with my samples at Transcriptic, so I was squeezing the work in around Pioneers. My main aim for the week was to ensure that the correct quantity of bacteria was being plated onto LB-agar so that single colonies could be grown and picked. This process forms the first stage of an assay I have planned.

What was really exciting is that I was working with a few of the new actions for the platform: spread, autopick and image_plate. I think the team knew I was super keen to get started using them, as these actions are currently still in flux and are not fully documented yet. These three commands work really nicely together, enabling you to start from stock strains and get down to individual clonal colonies to conduct downstream work with. Below you can see the product of two of these actions.

image plate photo

Every time I do work on the Transcriptic platform my mind is slightly blown. I just think it's absolutely amazing that I can sit at a laptop and do experiments, even when I'm just doing a simple dilution.

I would say that the previous work I've done on Transcriptic was just playing, but now I'm trying to do some actual work-work. And with that work-work I did run into a couple of challenges in using the platform.

Challenges

I had a couple of challenges working from Vienna. Primarily, the internet in the hotel was terrible, so I struggled to download the large image captures produced on the workcell by the image_plate function. I also ran into a frontend bug; however, Taylor at Transcriptic reached out to let me know they were aware I was having a problem. The team fixed the bug straight away, even before I'd got in touch with them! Preemptive customer support, I coined it ;).

1. Queuing (and I'm English!)

queue screengrab

So I've been spoiled by PaaS and IaaS. One thing I really didn't think about is workcell capacity. Using AWS, Heroku or DigitalOcean, one just doesn't think about capacity. If you need more capacity on Heroku, you just create a new dyno and it's done instantly. With cloud science, not so much...

The issue is that Transcriptic is so popular that there are a lot of people trying to conduct runs on the hardware simultaneously. Runs are made up of multiple actions that may or may not need to be executed in sequence. It is possible to conduct multiple runs at once on a single workcell; however, that's a tough scheduling task. To achieve it you basically need to solve some really hard travelling-salesman-like optimisation problems, which, from speaking to the Transcriptic team at SynBioBetaUK, is one of their biggest areas of focus.

To be fair, I don't really have an issue with queuing, but I would really like to know how long the queue is so I can manage my expectations. Queuing makes sense though, right? If you're a startup, you'd be fucking crazy to drop all that CapEx on building 50(?) workcells with unknown customer demand. If I were doing it, I'd probably let queue times grow to the point where a customer threatens to churn, then think about building another one! My impatience is really just a symptom of my excitement for the platform.

2. 'Pacific Standard Time' you cruel mistress

So this whole time difference thing is a little awkward. I might schedule a run at 9AM GMT; Transcriptic sees it 7 hours later at 8AM PST; then it possibly gets scheduled for 7PM PST (3AM GMT) if the workcells are super busy. That's probably an exaggerated example, but I think it illustrates what feels like not such a great experimental cycle for me. That's going to be the case, though, until European workcells exist, so I'll just keep quiet and be thankful they work with Europeans!

3. Run Chaining

Right now you can't chain successive runs together. This would be useful because runs usually start or finish during US work hours, when I'm asleep, so chaining would claw back some of the time lost to the whole time difference thing.

fin

So there are challenges with doing experiments over the internet, across the globe, on robots. Who'da thunk it? The important thing, however, is that it's working really well and I am getting results. To be clear, these are my experiences and not complaints. I see these as challenges that Transcriptic are aware of and working to improve all the time. So Transcriptic, keep up the good work and I'll keep updating y'all on my experiences using the platform.

PS

Oh, by the way, I found out that Kristian Nairn, who plays Hodor in Game of Thrones, is a house DJ, which is pretty awesome. He actually played the Pioneers after party! Check out his sets on SoundCloud.

Making Autoprotocols more flexible

On my first attempt at translating an experimental protocol to the Autoprotocol format I got as far as creating a run on Transcriptic with the protocol, which is awesome. The downside of that Autoprotocol, though, was that the samples being used in the experiment were hardcoded into the python script, so any variation in experimental samples or parameters would have to be made in the python.

Thankfully the awesome people behind Autoprotocol made it possible to create a protocol that is parameterised. The Autoprotocol can be packaged up and uploaded to Transcriptic, where Transcriptic's web app generates a user interface, making it easy for the user to just type in the experimental parameters and hit run, as you can see in the screenshot.

protocol ui screenshot

How to package up an Autoprotocol python script

I created my assay package by leaning heavily on the protocols in Autoprotocol-core and the Transcriptic Runner documentation.

Change how the protocol is wrapped

The first thing to do was replace the start and the end of the Python script. At the start of the old protocol a Protocol object was created, to which all the refs and actions get attached.

import json
from autoprotocol.protocol import Protocol

p = Protocol()

# ...your protocol...

And at the end, the whole thing was dumped as a JSON object in Autoprotocol format:

# Builds the Autoprotocol JSON
print json.dumps(p.as_dict(), indent=2)

In the new protocol we don't want the python script to dump JSON itself, as it now has to operate slightly differently. We also no longer need to instantiate a Protocol object ourselves, because one gets passed in for us.

So the start now looks like this:

from autoprotocol.util import make_dottable_dict

def assay_name(protocol,params):
    params = make_dottable_dict(params)

    # ...your protocol...

In the new protocol every action is wrapped up inside a function (assay_name in the example) that accepts the arguments protocol and params.

The end of the protocol now looks like this:

if __name__ == '__main__':
    from autoprotocol.harness import run
    run(assay_name, 'AssayName')

Let's move on to the next task of parameterisation.

Remove hardcoded parameters

In the old protocol we defined a lot of fixed parameters: the type of container, the volume of sample, which sample, and at which wavelengths to make spectral measurements. All of this can be parameterised, ultimately making the protocol more flexible and easier to use for colleagues who aren't confident editing a python script.

In the old protocol the user didn't have a choice of media; I would always force the use of LB broth doped with ampicillin, because it was hardcoded into every dispense command. But what if a user wanted to use un-doped LB? Adding this flexibility is straightforward: check the experimental parameters passed to the protocol function and assign the choice to a variable to use whenever media is needed. In the following example the choice between two types of media is made by checking the params object with simple conditional statements.

## Check the params object for the choice of media and make sure only one choice is made.
if params["media"]["lb-broth-100ug-ml-amp"] and not params["media"]["lb-broth-noAB"]:
    # Set the growth_media variable to the user's choice.
    growth_media = "lb-broth-100ug-ml-amp"
elif params["media"]["lb-broth-noAB"] and not params["media"]["lb-broth-100ug-ml-amp"]:
    growth_media = "lb-broth-noAB"
else:
    ## Notify the user that they need to make a choice.
    raise RuntimeError("You must select a growth medium.")

## Dispense the chosen medium into column 0 of a 96-well container
## ref defined elsewhere in the protocol; dispense needs a volume per column.
protocol.dispense(well_plate_96, growth_media, [{"column": 0, "volume": "1500:microliter"}])

So where does the params object get populated?

Introducing Manifest.json

Manifest.json serves a couple of purposes: it is where the assignable parameters for the protocol are defined, alongside a set of default parameters used when previewing the protocol during testing with Transcriptic Runner.

This is the example manifest.json from the documentation:

{
  "version": "1.0.0",
  "format": "python",
  "license": "MIT",
  "protocols": [
    {
      "name": "SampleProtocol",
      "command_string": "python -m my_protocols.sample_protocol",
      "description": "this is a sample protocol",
      "preview": {
        "refs": {
          "sample_plate": {
            "type": "96-pcr",
            "discard": true
          }
        },
        "parameters": {
          "source_sample": "sample_plate/A1",
          "dest_sample": "sample_plate/A2",
          "transfer_vol": "5:microliter"
        }
      },
      "inputs": {
        "source_sample": "aliquot",
        "dest_sample": "aliquot",
        "transfer_vol": "volume"
      },
      "dependencies": []
    }
  ]
}

You can see it kicks off with the version number and other housekeeping details about the protocol. In the example there is just one protocol; however, a whole array of protocols can be defined. The important bits here are:

  1. command_string: the command that executes the python script of the protocol
  2. preview: the parameters and refs used in the preview (good for testing locally)
  3. inputs: dictates the fields offered to the user to populate the params object

Let's look at the inputs block for the growth media choice I mentioned earlier:

{
  "inputs": {
    "media": {
      "type": "group",
      "description": "Type of media to grow bacteria in. (check off only one)",
      "inputs": {
        "lb-broth-100ug-ml-amp": {
          "type": "bool"
        },
        "lb-broth-noAB": {
          "type": "bool"
        }
      }
    }
  }
}

Under the "media" property there are two inputs one for each medium, the "type" of input is bool indicating that the selection input is either true or false.

For this kind of input Transcriptic generates checkbox UI elements as seen below:

protocol ui screenshot

Inputs need to be added for each of the parameters you reference in the python protocol.
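To make that concrete, here's a sketch of how the inputs from the sample manifest above would surface inside the protocol function; the function body is my own illustration, not taken from the docs:

def sample_protocol(protocol, params):
    # the three inputs defined in the manifest arrive in the params dict
    source = params["source_sample"]  # an aliquot chosen by the user
    dest = params["dest_sample"]      # another aliquot
    vol = params["transfer_vol"]      # a volume, e.g. "5:microliter"
    protocol.transfer(source, dest, vol)

Group inputs, like the media example, arrive as a nested dict of booleans, which is why the conditional statements earlier index params["media"]["..."].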

Directory Structure

Be sure to arrange the protocols in a file structure similar to that recommended by Transcriptic:

protocols/
  manifest.json
  requirements.txt
  my_protocols/
    __init__.py
    sample_protocol.py

The manifest.json needs to be at the top level of the directory tree, at least a level up from the python protocol.

Testing

To test the protocol using Transcriptic Runner, run the following in the same directory as the manifest.json:

$ transcriptic preview AssayName

If all is working properly it dumps the Autoprotocol JSON to STDOUT. Otherwise you will get an error due to mistakes in either the manifest.json or the python protocol. Keep fixing errors until you get the JSON dump; I found pylint useful for fixing basic syntax errors in my python.

After that you can bounce it off of the Transcriptic servers by piping the JSON to analyze:

$ transcriptic preview AssayName | transcriptic analyze

I found it useful to create a run from the preview, as I like to use the run UI on the Transcriptic web app to quickly scan through the run and make sure the protocol is doing what I want it to do.

$ transcriptic preview AssayName | transcriptic submit --project ":project_code" --title "Test Run" --test

If the run looks good on the Transcriptic web application it's time to package it up.

Uploading releases to Transcriptic

Transcriptic has a really easy way of uploading packages of protocols to the server. In the directory, create a .zip archive from the manifest.json and the directory containing the python protocol. Name the .zip release_someVersionNumber.zip, in line with the version number specified in the manifest.json.
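For example, with the directory structure above and version 1.0.0 in the manifest, the archive could be built like so (the version number is just illustrative):

$ zip -r release_1.0.0.zip manifest.json my_protocols/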

Next, log in to the Transcriptic web app and click 'Manage' for your organization. Then click the 'PACKAGES' tab and click 'Create New Package'. Name the protocol, give it a short description, then upload the .zip file for the package. Click 'Save & Analyze'. Under 'RELEASES' you will want to click publish to ensure other members of your organization can use the protocol.

Whenever you improve the protocol, bump the version number in the manifest.json, zip up all the files again, upload the new version, then hit publish! I think this release management is a really nice feature, as version tracking of protocols is essential for experimental repeatability.

Summary

All in all, the process of packaging up a protocol was not too difficult, and I think it goes a long way to making the Transcriptic platform more widely accessible to all researchers, not just the ones who can write python. The ability to make protocols public is awesome as well. You could write a paper, publish it in PLOS, and reference your protocol on Transcriptic. Then when people want to try your technique, all they need to do is find your protocol on Transcriptic and use their own samples! Think of what this will do for repeatability...

I had the pleasure of meeting Tali, Max and Dorothy-Lou from Transcriptic at SynBioBetaUK this week. They are all extremely smart and super friendly people, and I highly recommend you get in touch with them if you are interested in giving Transcriptic a try with your research.

Twemoji is coming

At Dentally we were initially hesitant about using Twitter regularly, because we wanted to use the platform to provide value rather than just self-promote, which is all over Twitter.

But I'm getting distracted; this post is actually about Twemoji!

Slack does Twitter well, and I like how relaxed they come across as a company; I think their use of emojis is in concert with that. But what I like more than emojis are Twitter's own Twemojis, as I just prefer the colours and design. So I now have Twemoji support here, all thanks to Twemoji Awesome, which is a really easy way to integrate them into your site.

Go forth and Twemoji...

My first attempt at working with Autoprotocol

Recently I wrote about my first experience of running a simple experiment on Transcriptic's cloud biology platform: a simple bacterial growth curve. The protocol for running this experiment was produced by interacting with a GUI to enter parameters; the actual nitty-gritty of liquid handling and spectroscopic measurements was already predefined.

The growth curve protocol had been written as a package, which is simply a bundle of code that, when connected to the Transcriptic web application, generates a user interface that can parametrically generate the commands executed by the platform. This is a great way of handling protocols as it allows easy execution of experiments by users who have no experience with code.

Transcriptic accepts protocols defined by the Autoprotocol standard (also designed by Transcriptic). So another method of executing experiments on Transcriptic is to write a protocol in the Autoprotocol standard and submit it to Transcriptic via the API for execution. This is what I had a quick go at.

autoprotocol brand

Example protocol, the burden assay

First I needed a protocol to turn into the Autoprotocol standard JSON format. I picked a protocol from 'Quantifying cellular capacity identifies gene expression designs with reduced burden', a paper from the Ellis group and co. The specific protocol is the spectroscopic analysis of a fluorescent reporter recombinant DNA system transformed into cells. The experiment is designed to assess the burden of the gene cassette on the host.

Writing the protocol

Autoprotocol protocols can be quite lengthy due to the granularity of specifying each liquid handling step, and when working with 96-well or 384-well plates one can end up with a lot of repetition. To make the construction of protocols simpler, Autoprotocol provides a python package that can programmatically output Autoprotocol JSON.
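To get a feel for the package, here's a minimal sketch, with made-up container names, that builds a protocol containing a single transfer and dumps it as Autoprotocol JSON:

import json
from autoprotocol.protocol import Protocol

p = Protocol()
# a throwaway PCR plate, discarded at the end of the run
plate = p.ref("sample_plate", cont_type="96-pcr", discard=True)
# a single liquid handling step: move 5 uL between two wells
p.transfer(plate.well("A1"), plate.well("A2"), "5:microliter")
# build and print the Autoprotocol JSON
print json.dumps(p.as_dict(), indent=2)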

Containers

I first started by defining the references in the protocol. These are essentially the containers used in the experiment, be they existing containers holding reagents or containers to be created, where, for instance, the assay will take place.

When instantiating a container you need to supply a few arguments, mainly an ID, a container type and a container destiny (where the container ends up at the end of the run).

import json
from autoprotocol.protocol import Protocol

#instantiate new Protocol object
p = Protocol()

# Add the containers to the protocol. Unfortunately I had to pick slightly
# different containers to the ones used in the paper.

# The protocol assumes I already have stocks of transformed bacteria and of arabinose.

bacteria_stock = p.ref("bacteria_stock", cont_type="micro-2.0", storage="cold_4")
bacteria_overgrow = p.ref("bacteria_overgrow", cont_type="96-deep", discard=True)

inducer_arabinose = p.ref("inducer_arabinose", cont_type="micro-1.5", storage="cold_4")
reaction_plate = p.ref("reaction_plate", cont_type="96-flat", storage="cold_4")
bacteria_prep = p.ref("bacteria_prep", cont_type="96-deep", discard=True)

After adding all the containers, the protocol actions kick off with growing a fresh liquid culture from the bacterial stock, the aim being a culture of bacteria in an exponential phase of growth.

Liquid handling and culturing bacteria

# Should be dispensing M9, but M9 media isn't a standard reagent at Transcriptic
# Dispense fills the container with standard reagents from Transcriptic
p.dispense(bacteria_prep,
            "lb-broth-100ug-ml-amp",
            [{"column": 0, "volume": "1500:microliter"}])

# Add bacteria from stock container to fresh media
p.transfer(bacteria_stock.well(0).set_volume("1000:microliter"),
           bacteria_prep.well(0),
           "5:microliter")

# Cover the plate prior to shaking incubation
p.cover(bacteria_prep, lid="universal")

# 16hr incubation
p.incubate(bacteria_prep,
           "warm_37",
           "16:hour",
           shaking=True)

# Prep media for overgrowing bacteria sample
p.dispense(bacteria_overgrow,
            "lb-broth-100ug-ml-amp",
            [
              {"column": 0, "volume": "1000:microliter"}
            ]
          )

p.uncover(bacteria_prep)

# Inoculate overgrowth sample
p.transfer(bacteria_prep.well("A1"),
           bacteria_overgrow.well("A1"),
           "20:microliter",
           mix_after=True)

p.cover(bacteria_overgrow, lid="universal")

# Incubate bacteria to guarantee exponential phase
p.incubate(bacteria_overgrow,
           "warm_37",
           "1:hour",
           shaking=True)

p.uncover(bacteria_overgrow)

# Transfer exponential phase bacteria to microplate for the assay
p.distribute(bacteria_overgrow.well("A1").set_volume("1000:microliter"),
             reaction_plate.wells_from(0,4),
             "200:microliter"
             )

OD600 and fluorescence measurements with an arabinose induction step

After the bacteria have been cultured into an exponential phase of growth, the culture is transferred to another plate where the spectroscopic assay takes place.

During the assay, measurements are made of the OD600, the fluorescent emission at 528nm and the fluorescent emission at 645nm. These measurements are taken three times prior to the expression system being induced by arabinose. Then, following induction, the three spectroscopic measurements are taken every 30 minutes, 8 times.

As an aside, I'm pretty sure this code could be cleaned up a lot to remove much of the repetition.
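For instance, the three reads could be pulled out into a small helper (a sketch; measure is a name I've made up) that is called with a different dataref suffix each time:

def measure(p, plate, wells, suffix):
    # bundle the OD600 read and the two fluorescence reads under a shared dataref suffix
    p.absorbance(plate, wells.indices(), "600:nanometer",
        "OD600_reading_" + suffix)
    p.fluorescence(plate, wells.indices(), excitation="485:nanometer",
        emission="528:nanometer", dataref="528_reading_" + suffix)
    p.fluorescence(plate, wells.indices(), excitation="590:nanometer",
        emission="645:nanometer", dataref="645_reading_" + suffix)

# e.g. measure(p, reaction_plate, reaction_plate.wells_from(0, 4), "post3hr")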

## Assay time!

# Incubate bacteria at 37 degrees for 3 hours
p.cover(reaction_plate, lid="universal")
p.incubate(reaction_plate, "warm_37", "3:hour",shaking=True)

# Read the first four wells on the reaction plate.
p.absorbance(reaction_plate, reaction_plate.wells_from(0,4).indices(), "600:nanometer",
    "OD600_reading_post3hr")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="485:nanometer", emission= "528:nanometer", dataref=
        "528_reading_post3hr")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="590:nanometer", emission= "645:nanometer", dataref=
        "645_reading_post3hr")

# Incubate bacteria at 37 degrees for 30 mins
p.incubate(reaction_plate, "warm_37", "30:minute",shaking=True)

# Another measurement
p.absorbance(reaction_plate, reaction_plate.wells_from(0,4).indices(), "600:nanometer",
    "OD600_reading_post3hr2")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="485:nanometer", emission= "528:nanometer", dataref=
        "528_reading_post3hr2")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="590:nanometer", emission= "645:nanometer", dataref=
        "645_reading_post3hr2")

# Incubate
p.incubate(reaction_plate, "warm_37", "30:minute",shaking=True)

# Measurement
p.absorbance(reaction_plate, reaction_plate.wells_from(0,4).indices(), "600:nanometer",
    "OD600_reading_preinduce")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="485:nanometer", emission= "528:nanometer", dataref=
        "528_reading_preinduce")
p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="590:nanometer", emission= "645:nanometer", dataref=
        "645_reading_preinduce")

p.uncover(reaction_plate)

# Induce the expression system with arabinose
p.distribute(inducer_arabinose.well(0).set_volume("1000:microliter"),
             reaction_plate.wells_from(0,4),
             "100:microliter")

p.cover(reaction_plate, lid="universal")

# Note that the 8 time series measurements from here can be generated with a
# single loop. The count variable is used in the dataref assignment.
for count in range(8):
    # Incubate
    p.incubate(reaction_plate, "warm_37", "30:minute",shaking=True)

    # Measure
    p.absorbance(reaction_plate, reaction_plate.wells_from(0,4).indices(), "600:nanometer",
        "OD600_reading_" + str(count))
    p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="485:nanometer", emission= "528:nanometer", dataref=
            "528_reading_" + str(count))
    p.fluorescence(reaction_plate, reaction_plate.wells_from(0,4).indices(), excitation="590:nanometer", emission= "645:nanometer", dataref=
            "645_reading_" + str(count))

Once the run finishes, all the containers meet their 'destiny': they are either discarded or returned to storage.

Getting the protocol onto Transcriptic

Once the protocol has been written, it can all be 'built' to JSON in the Autoprotocol format with this line in the python protocol:

# Builds the Autoprotocol JSON
print json.dumps(p.as_dict(), indent=2)

Executing the file with python burden.py will dump the JSON to STDOUT; this output can then be piped into other commands.

I used the Transcriptic Runner package to first validate the protocol and then create a test run via the API, with the following command from the docs:

$ python burden.py | transcriptic submit --project ":project_id" --title "Burden Assay" --test

If the PUT request on the API works, the run appears with all of the actions and containers interpreted into the UI:

burden screenshot

Once the run is logged against the project, it is easier to check it through the UI than to go through the JSON from the python file.

I haven't tested the run on any materials, as I don't have any of the strains or plasmids in my inventory, but it would be cool to try and replicate some of the results from the paper.

I'll try and work on wrapping the burden assay protocol in a harness, which makes the protocol flexible by accepting parameters via a UI.

The documentation for both Transcriptic and Autoprotocol is really decent and helpful. In particular, analysis and validation with the Transcriptic Runner was a big help in eliminating small errors in the protocol. One common fix was setting 'virtual volumes' for containers that will only hold a volume of liquid at some point in the future.
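For example, the set_volume calls sprinkled through the protocol above exist purely to declare to the analyzer that a well will contain liquid by the time a step executes:

# declare that the stock well will hold 1 mL at execution time,
# so downstream transfers pass validation
bacteria_stock.well(0).set_volume("1000:microliter")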

Looking forward to trying to grab a quick word with the people from the Transcriptic team at SynBioBeta next week at Imperial College.

Ben