Why I unfollowed @scienmag

So here’s the thing. A few people have been tweeting sciencey-looking articles into my timeline of late. They come from the website scienmag.com, which bills itself as Science Magazine, and from the Twitter account of the same name, @scienmag, which displays the name Science.

I recently stopped following this account for a number of reasons, chief amongst them that it is clearly trading off the name of the well-respected scientific journal/magazine Science. The latter, published by the AAAS, famously uses the web address http://www.sciencemag.org and the Twitter handle @sciencemagazine: in the early days of the commercial internet, science.com was for one reason or another already taken, and similarly, in the early days of Twitter, @science was quickly snapped up.
Now, most scientists I know do not get the two confused, but every now and then even quite prominent scientists forget and RT articles from Scienmag. In fact, when I started on Twitter, one of the first accounts I followed was @scienmag, thinking it was the real deal. I soon learnt it wasn’t, but was too lazy to unfollow.

But the main reason I dislike @scienmag, and the final prompt for me to unfollow, is that it takes science stories from various sources, often press releases about new research, publishes them verbatim or near enough on its website, and yet almost never provides a link to the actual research article in question.
Why would they do this? It all comes down to the commercial web principle of eyeball residence time. Websites that make their money selling advertising space never want you to leave their site by anything but a sponsored ad (usually Google ads). In fact, the only links off the page you arrive at go either to other Scienmag stories or to ads. What we as scientists want, of course, is to get to the actual science as fast as possible, so we would simply click on the journal link and leave their webpage behind, creating a very low eyeball residence time and therefore fewer ad dollars for them.
Take this example that crossed my Twitter timeline this evening. It was a retweet of a quote tweet that linked to a Scienmag page about survivors of pediatric Hodgkin lymphoma studied at St. Jude Children’s Research Hospital and their long-term disease burden.

This is the Scienmag page.

And as you can see it looks like an original magazine article, complete with quoted scientists that you might be led to believe were actually interviewed by the journalist. But there is no link to the Lancet Oncology article under discussion. 
So I went to the St. Jude website, looked around for their press release on this subject, and found this, which is strikingly similar to the “story” at Scienmag:

Paragraphs extracted from the Scienmag site (left) and the original St. Jude press release (right)

Except that this time, there actually is a link to the real published article at Lancet Oncology. 
Hmmm. Unfollow. 

Posted in Uncategorized | 2 Comments

Pymol and very large PDB files. The Zika Cryo-EM structure as a case study

One of my major research interests is the Flavivirus group of viruses. In our work at the University of Queensland we’ve been involved in developing inhibitors of viral proteins associated with the Dengue and West Nile viruses, particularly inhibitors of the NS2B/NS3 protease and the surface E-protein [1,2,3]. A key way to target these proteins is to examine their X-ray crystal structures. A newer technique for determining protein structures is cryo-electron microscopy (Cryo-EM).

I found myself on Thursday looking at the Cryo-EM structure of the entire Zika virus, published recently by the Kuhn group [4]. Zika belongs to that same group of Flaviviruses and is therefore of great interest to me, quite apart from the recent flurry of media commentary and public health concern surrounding current outbreaks. The Kuhn group had previously published Cryo-EM structures of the Dengue and West Nile viruses as well.

I wanted a closer look at the structure, so I opened it in my favourite visualisation program, Pymol. After a bit of fiddling, which I’ll go into below, I got to this lovely picture of all the surface proteins of a Zika virus particle, with all the chains rendered as cartoon helices, sheets, etc.



Zika virus surface proteins via Cryo-EM


I was quite pleased with the result and posted it to Twitter. I next made a Quicktime movie directly from Pymol of the whole picture spinning 360° over 4 seconds, and posted it to YouTube. The result is a bit pixelated because I just used the default YouTube compression settings, which reduced the 28MB Quicktime file to an 808KB YouTube video.

A few kind people commented on the structure, which has a lovely symmetry to it, but my interest was piqued by one Twitter user, Jonas Boström (@DrBostrom), who asked whether I’d be happy to share the viral assembly as a single PDB file so he could look into making a VR version. Sure, I said, and went back to Pymol and first tried myself to save it as a PDB and a VRML2 file, the latter being one of Pymol’s export formats. Some time later I had a 54MB PDB file and a, wait for it, 2.93GB .wrl file. Not really the size of files you want to pop into a tweet! Even gzipped, the PDB file was 12MB. But there was a bigger problem. Before I go into the problems and the techniques required in Pymol to get the assembly just so, I need to fill you in on a little of the background to these surface proteins.

The surface proteins of the viral particle are a mixture of the E and M proteins arranged in a regular pattern: 360 proteins form an icosahedral shell. If you go to the PDB page for this structure and download the PDB file 5ire.pdb, what you are getting is the asymmetric unit, which consists of just 6 chains, A-F: three repeats of the E-protein (A-C) and three of the M protein (D-F). There are 60 of these subunits in the biological assembly, giving a total of 360 viral proteins at the surface: 180 E and 180 M.
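Just to sanity-check that arithmetic, here is a throwaway Python sketch (the numbers are the ones quoted above; nothing else is assumed):

```python
# The 5ire asymmetric unit: 3 E-protein chains (A-C) and 3 M-protein
# chains (D-F). The biological assembly contains 60 copies of it.
copies = 60
e_per_unit = 3
m_per_unit = 3

total_e = copies * e_per_unit   # 180 E-proteins
total_m = copies * m_per_unit   # 180 M-proteins
total = total_e + total_m       # 360 surface proteins in all
print(total_e, total_m, total)  # → 180 180 360
```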

So let’s take a look at this subunit for a bit. This smaller, more manageable chunk of the overall virus surface is the best one to use if all you are interested in is the way these proteins interact with each other, or the important Asn154 glycosylation site (boxed in the figure; one glycan for each of the E-protein chains).


The 5ire monomer contains 3x E- and 3x M-proteins


But if you want to visualise the whole set of viral surface proteins, you want to download the “biological assembly” file, which when unzipped runs to 54MB.


RCSB download dialog gives both monomer and biological assembly options



5ire assembly as it opens in Pymol initially as 60 states


If you decompress and then open this file in Pymol, you will at first see just the one subunit, as above, but note that it has been loaded as 60 states (boxed in the figure). You can cycle through these with the play button, but we want to visualise them all at once. To do this we use the split_states command at the Pymol command line. This splits the multi-state file into 60 new objects; in this case I have given each new object the zika prefix. You can then delete the original multi-state object for neatness. You will probably need to click the zoom button to get them all into the window (or type zoom at the command line).

split_states 5ire_assembly, prefix=zika

delete 5ire_assembly




after split_states you can visualise the whole assembly at once


Now we can see all the subunits at once, and things start to get a bit more tricky. If you have a reasonably modern Mac or PC you should be fine, unless you try to make some fancy surfaces, in which case you might find your machine chugging a bit. For now, the first thing you should do is save the session under whatever file name you like, in case the next few steps cause Pymol to crash. Pymol adds a fair bit of overhead to these session files, so you’ll end up with something about 210MB in size.

You might want to experiment with turning on the cartoon representation at this point too. This is also the point at which I made that short spinning video I mentioned earlier. The next thing you will probably want to do is save the resulting assembly as a single PDB file so you don’t have to repeat this process, or perhaps so you can offload the file to a different modelling package, as Jonas Boström suggested via Twitter.

Fortunately Pymol lets you export/save multiple objects into a single PDB file. The simplest way to do this is via the “select” function. At the Pymol command line type:

select *

Everything should now be highlighted. Now choose File → Save Molecule, and in the dialog box scroll down and choose “sele” as the object to be saved, then give it a file name, and wait…

In our case the resulting file is about 54 MB, but there is a big problem. To demonstrate, open the PDB file in a text editor and scroll down. I used Smultron, and you have to wait a fair bit as it gets very laggy. Anyway, if you scroll down far enough you will discover a problem I had not encountered before: Pymol doesn’t seem to be able to export a PDB file of more than 99999 atoms properly. (The fixed-width PDB format only allows five columns for the atom serial number, so there is nowhere for larger numbers to go.) This file contains over 660000 atoms, and every one past 99999 is numbered 99999.
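If you want to see the scale of the damage without waiting on a laggy text editor, a few lines of Python will do it. This is just a sketch; the helper name is mine, and the path in the usage comment is a placeholder for wherever you saved the file:

```python
def count_stuck_serials(lines, stuck="99999"):
    """Count ATOM/HETATM records whose serial number field
    (columns 7-11 of the fixed-width PDB format) is pinned
    at the given value."""
    n = 0
    for line in lines:
        if line.startswith(("ATOM", "HETATM")) and line[6:11].strip() == stuck:
            n += 1
    return n

# Usage (path is a placeholder):
# with open("5ire_assembly.pdb") as f:
#     print(count_stuck_serials(f))
```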


Pymol gets stuck at numbering atoms after No 99999



Even the heteroatoms (Glycans in this case) have atomID 99999



And the CONECT records are horrid


This, as you would expect, is going to cause problems, not least when we get to the CONECT records part of the PDB file. Behold the ugliness. However, all the xyz coordinates are legit. Let’s see what happens when we load this big PDB file back into a new Pymol session. Whoops, see all those extra-long bonds? That’s trouble.


Some long wonky bonds there courtesy of those CONECT records


Fortunately, CONECT records are not strictly necessary in a PDB file if you just want to do simple visualisations, so in a text editor it is a simple matter to remove them all. The resulting edited PDB file can then be read back into Pymol without all those nasty extra lines, but…
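If you’d rather not fight the text editor at all, the same cleanup is a few lines of Python. Again a sketch only; the file names in the usage comment are placeholders:

```python
def strip_conect(lines):
    """Drop CONECT records; everything else passes through untouched.
    The xyz coordinates live in the ATOM/HETATM records, so simple
    visualisation is unaffected."""
    return [line for line in lines if not line.startswith("CONECT")]

# Usage (paths are placeholders):
# with open("5ire_assembly.pdb") as f_in, \
#         open("5ire_noconect.pdb", "w") as f_out:
#     f_out.writelines(strip_conect(f_in))
```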


Wonky CONECT records removed

…there’s a new problem. This export-import round trip loses the record of which secondary structural elements the protein contains, so Pymol does nothing when you try to show a cartoon representation of the proteins. The normal thing to do in these circumstances is to run util.ss from the Pymol command line. I did this and got the famous Apple spinning beachball of death. I stuck it out, however (put the cursor in the “do not sleep” corner and went and made coffee), and eventually Pymol came back to life. But it was a long wait (20 minutes). This is a good place to point out that this was all done in Pymol 1.6 on a 2011 iMac (2.7GHz Intel Core i5) running 10.10.5 with 8GB RAM. But the process eventually ended in failure: despite showing all the right messages in the log box, no secondary structural features could be obtained.

 util.ss: initiating secondary structure assignment on 103680 residues.

util.ss: extracting sequence and relationships…

util.ss: analyzing phi/psi angles (slow)…

util.ss: finding hydrogen bonds…

util.ss: verifying beta sheets…

util.ss: assignment complete.

Save: Please wait — writing session file…

So the process is not yet complete. The good news is that I have a PDB file that doesn’t make wonky bonds; the bad news is that I still have >500000 atoms with atomID 99999. Clearly this job is a bit beyond Pymol’s current abilities. I shall keep you posted. Once I have some more functional files I may put them in a public Dropbox if anyone wants them, as they’re still a bit too large to email.


[1] “Potent Cationic Inhibitors of West Nile Virus NS2B/NS3 Protease With Serum Stability, Cell Permeability and Antiviral Activity.” Martin J. Stoermer, Keith J. Chappell, Susann Liebscher, Christina M. Jensen, Chun H. Gan, Praveer K. Gupta, Wei-Jun Xu, Paul R. Young, and David P. Fairlie, J. Med. Chem. 2008, 51(18), 5714-5721. Full text via ACS publications

[2] “Structure of West Nile Virus NS3 Protease: Ligand Stabilization of Catalytic Conformation.” Gautier Robin, Keith Chappell, Martin J. Stoermer, Shu-Hong Hu, Paul R. Young, David P. Fairlie, Jennifer L. Martin, J. Mol. Biol. 2009, 385(5), 1568-1577. Full text via ScienceDirect.

[3] “In silico screening of small molecule libraries using the dengue virus envelope E protein has identified compounds with antiviral activity against multiple flaviviruses” Thorsten Kampmann, Ragothaman Yennamalli, Phillipa Campbell, Martin J. Stoermer, David P. Fairlie, Bostjan Kobe, Paul R. Young Antiviral Research, 2009, 84(3), 234-41. Full text via ScienceDirect.

[4] The 3.8 Å resolution cryo-EM structure of Zika virus.
Sirohi D, Chen Z, Sun L, Klose T, Pierson TC, Rossmann MG, Kuhn RJ.
Science, 2016, 352(6284), 467-70. Pubmed Link.

Posted in Chem, Chem_Comp, mac, Pymol, Uncategorized | Tagged , , | 2 Comments

Who was the hardest Bond? A Chemical Perspective

Opinions vary wildly, on the internet as indeed elsewhere, about who was the best James Bond: from die-hard Sean Connery fans to those who swoon over a buff Daniel Craig. Today I take a look at the Bond phenomenon, not to arbitrarily comment on who was the best, but to focus on who was the hardest. And to help me I am turning to science, more specifically the Mohs hardness scale.

Chronologically Bond has been portrayed by: Sean Connery, David Niven, George Lazenby, Roger Moore, Timothy Dalton, Pierce Brosnan, and Daniel Craig.

So by extracting chemical elements from their surnames, we arrive at Sean Connery as Cobalt, with a Mohs hardness of 5.5. Next comes David (Nickel) Niven, who is a bit softer with a Mohs hardness of 4. George (Lanthanum) Lazenby is softer still at just 2.5. Roger (Molybdenum) Moore is unexpectedly hard at 5.5.

Timothy Dalton presents a bit of a challenge, as there are no Da elements, though the name itself is very apt in this context (the dalton being the unit of atomic mass). However, if we cheat just a little and skip one character, we can call him Timothy (Aluminium) Dalton, which has a Mohs hardness of 3.

Pierce Brosnan also requires us to cheat a bit, as Bromine is a liquid and doesn’t get a Mohs hardness number (but is a well hard element in its pure form, let me tell you!). So we’ll get a bit creative and call him Pierce (Bronze) Brosnan, which gives him a modest hardness of 3.

But the undisputed winner in this contest is the current Bond, Daniel (Chromium) Craig, with a Mohs hardness of 8.5.

Daniel Craig image via Wikipedia

That’s pretty clear to me.

So in descending order:

Bond Actor Mohs Hardness
Daniel Craig 8.5 (Chromium)
Sean Connery 5.5 (Cobalt)
Roger Moore 5.5 (Molybdenum)
David Niven 4 (Nickel)
Timothy Dalton 3 (Aluminium)
Pierce Brosnan 3 (Bronze)
George Lazenby 2.5 (Lanthanum)


Posted in Uncategorized | Leave a comment

A small personal story on non-open science hypocrisy

<Minor late night typo edit and update. This story took place more than 13 years ago, before the current big push towards open data really took off>

A few years ago we published a paper on the NMR solution structures of some small peptides. I won’t post the link to the actual paper because it’s not really relevant to today’s story.

NMR solution structures of short peptides are a funny sort of thing. Unlike small-molecule crystal structures, where on publication you’re expected to lodge the structures in a database like the CCDC (the Cambridge Crystallographic Data Centre), or protein structures, which you’re expected to lodge in the PDB (and most journals require you to do so as a condition of publication), small peptide NMR structures are not allowed in the PDB; currently the minimum length accepted is 24 residues. The reason is that small peptides often adopt a range of structures in solution, and hence the PDB moderators reason that such structures are often too imprecise to go into a scholarly database. There are points to be made on both sides of this debate, and I acknowledge that it is a valid concern; frustrating, as we’re about to see, but I gladly accept their decision. One alternative is to deposit NMR structures in a third database, the BMRB (the Biological Magnetic Resonance Data Bank). This repository allows the upload of NMR constraint data as well, so people can in principle reproduce published results, or at least run the same data through their preferred structure calculation program.

The journal we published in did not require us to include the structures in the Supporting Material, and so we didn’t. The molecules were also covered by a patent issued to the University and the industrial sponsor of the research, which complicated matters slightly. Prior to publication we had been contacted a few times by competing groups in the field asking if we had NMR structures for these peptides and whether they could get access to them. Due to the emerging patent situation we declined. After publication those requests dried up. However, one of our competitors did publish a paper a couple of years later, publicly berating us for not releasing the data into the public domain. They explicitly commented that by not releasing the data we were actively hindering the field of research.

The crux of their “new” paper was basically “we had to repeat the work and here is the same structure”. The ONLY point of novelty of their “work” was their claim to have identified, under the same conditions as us, a second small-population conformer of the peptide.

They were wrong. If they had read our other papers on the subject they would have known that the synthesis of this molecule occasionally generates a small amount of epimerisation at the C-terminus of the peptide. What they were seeing as a “new” conformer was in fact merely a diastereomeric impurity. The compound itself is now available from commercial vendors, and I know that early batches were plagued with this impurity at abundances up to 50%. We’ve always made our own.

Anyway, they went off in a huff and “published” their “new” “public” structure. And guess what: they NEVER published the coordinates, or even the constraint files. Not in the BMRB. Not in the PDB. Not anywhere. Nice.

<update: I double-checked again this evening and their data is still not in the BMRB, where they intended to put it>

Posted in Chem, literature, NMR | Leave a comment

Twitter deliberately makes it hard to remove even trivially obvious imposter accounts. My experience.

Over the course of the last several months, I have been involved in a protracted and ultimately futile attempt to have impersonator accounts shut down on Twitter. I noticed back in October that around September 2014 someone had scraped my Twitter account and created two accounts with minor misspellings of my surname (Case 1: MartinStoemerr, Case 2: MartinStoermre), and was using them to send Twitterspam. They had stolen my custom profile header photo (one of mine of Eibsee, in Bavaria, in winter), as well as my brief Twitter profile of the time, my Brisbane location, my job description, and the link to my blog on WordPress. One of them subsequently seems to have changed the location field to Melbourne while leaving Brisbane in the bio. They also stole my custom avatar, a buckminsterfullerene I had made in Pymol in the colours of my football team, the Brisbane Roar.

Now, this is a fairly minor case of impersonation, I know, certainly nothing like the abuse some people get online, but I stuck at the reporting process partly because it’s the right thing to do, and partly because I wanted to see what Twitter’s complaints process is like. Well, I’m here to report that it’s a mess. I am going to take you through it blow by blow, just so you can see how tedious they deliberately make this in an effort to make you just go away. The core problem, as far as I can see, is that either the two cases were dealt with by two different people using different interpretations of the rules, or someone negligently merged the two cases and marked them both as complete when one in fact was not.

Either way, the first thing that will happen if you report that someone is impersonating you is that Twitter responds with a boilerplate email saying you have to provide a government-issued photo ID to prove you are who you say you are. This is a gross invasion of privacy. You want me to email you my PASSPORT or DRIVER’S LICENSE? What is the point of that? What are you going to check it against? Does Twitter have access to the relevant Australian government agencies to check these? Of course not. Could they even prove that what you sent them wasn’t in fact a neat Photoshop job? Of course not. This is just a simple case of putting you off in order to MAKE YOU GO AWAY.


Five days later you will get an additional email entitled Case #99999999: Friendly reminder [ref:_Quardleoodleardlewardledoodle:ref], reminding you that you haven’t acceded to their demand to have your privacy eroded further.

Case2Friendlyredacted

Here is where my two cases diverged. I sent identical replies to both case IDs saying that I was not going to provide photo ID, for the above reasons, and that a cursory examination of the account creation dates should easily establish priority.

Case 2 was dealt with, with no more than that, and Twitter shut the offending account down. Case 1, however, was not, and somehow that case was also marked as complete. A few days later I noticed that the second account was still online (and still is). I reported this and asked when the second account would be dealt with. Twitter replied that the case was now closed and that I had to start all over again by opening a new case file.

Case2RedactedCompleted.jpg

Which I did, only, of course, to get the same boilerplate response as I originally got. Frustrated, I sat it out, only to get the “Friendly Reminder” email again. Subtext from Twitter: SUBMIT OR GO AWAY.

So I sent a very lengthy response reiterating that I was not going to send them any photo ID, and outlined the whole scenario to that point (including copies of all the previous emails and reasoning), making sure to point out that they had already found one of my case files to be completely meritorious, so why were they asking me all this again?


Case3PhotoIDdemand.jpg

I reply once again, pointing out that one case has already been dealt with easily, and asking why the second case cannot be dealt with in the same way, chronicling of course all the case IDs along the way (for those who have lost count, Case 1 has become Case 3).

I get the same response: without photo ID proof, Twitter will be doing nothing. Apparently my emails either go straight to a bot reply service or they are digging in their heels.

I decide to up the ante, asking them to escalate my request and provide a valid answer. I also mention that I am preparing this blog post. Stay tuned.

jerefuse.jpg

Fast forward to February 2016, and I send a follow-up email to the case ID asking why no further action has been taken. Instantly comes the reply that the case has been closed and that I need to submit a new impersonation case to @support. LATHER, RINSE, AND REPEAT. Except no, I’m not going to do it. I’m just going to publish this post detailing their ineptitude and intransigence. Likely they will do nothing either way. But rest assured I am writing it all down for future reference.

3monthslater.png

3monthslaterreply.png

So what are these accounts? Opinions vary, but a common theme is that if you look closely at these imposter bot accounts, nearly all their “followers” are also bots. They also go dormant for extended periods, then get reactivated. This has been suggested as a tactic to game Google rankings and things like Klout scores without being automatically tagged as a bot by autoblockers.

Posted in Uncategorized | Leave a comment

When is sharing your or your lab’s research progress on Twitter oversharing?

One of the joys of Twitter for me is seeing people get excited online when they have a paper accepted, or receive a conference invitation. These are great things to share via a quick tweet. Sometimes, but not always, there are congratulatory replies, which add to our sense of a science Twitter community. There is of course the opposite too: the depressed “paper rejected” tweet. Again, we can all sympathise, and feel that sense of community.

But the “accepted yay!” tweet doesn’t get your paper read by anyone who doesn’t follow it up with you. So it seems sensible to add a link to the paper, either in the actual journal, or on a preprint server like arXiv.

But when to do it? Let us consider a couple of different scenarios.

Scenario 1: You’re a grad student or postdoc.

It seems like overkill to send a series of tweets that are likely to annoy people:

Tweet 1: I’m so excited my paper got accepted!

Tweet 2: ICYMI, my paper is now in the “Just Accepted” list. Link: http://dev/null

Tweet 3: ICYMI, my paper is now in the “Papers In Press” list. Link: http://dev/null

Tweet 4: ICYMI, my paper is now in print! With page numbers and everything! Link: http://dev/null

So which of the above do you find acceptable, and which cross over into oversharing? Would you just hold off tweeting at all and just wait until the paper is online and has a permalink? It takes away some of the spontaneity I guess, so I personally would just do 1 and 3 (but I really don’t like ICYMI tweets). Also your tweet should say something about what your paper is about, space permitting.

Scenario 2: You’re a PI

I’d consider a selection of tweets like these a reasonable compromise between oversharing and giving due credit.

Tweet 1: Blurry Sunday AM tweet. Paper accepted! Yes! Win!

Tweet 2: Congratulations to our grad student ******, who had their synthesis paper published by JACS today. Link: http://dev/null/ASAPs

Tweet 3: It’s been a great month in the ME lab. 4 papers in press, and 2 conference presentations. Link, link, link (or link back to PI web page publications list with placeholder anchor)

It might just be me, but PIs tweeting about their H-index, number of page views, etc. seems a bit off.

Scenario 3: You’re the Departmental/Institute Social Media person

Tweet 1: New paper from the Sparklybutt Group out now in Nature: http://dev/null. (NOTE: link should either be to the paper itself or a press release if appropriate)

Tweet 2: Sparklybutt Group Nature paper highlighted Here, here, and here! Link, link, link!

Scenario 4: University PR units tweeting “showcased research”

Tweet 1: $Department Sparklybutt Group’s Nature paper has been highlighted here, here, and here! Link, link, link. (NOTE: Press releases should contain links to the paper itself)

Meek opinion

All of the above is obviously just my opinion, and is just about Twitter; using your group’s website or blog to get more readers is another story entirely. So what do you consider oversharing of papers, and how much is Goldilocks?


Posted in Chem, literature, Twitter, Uncategorized | Leave a comment

Reflections on lab protocols 1. Cleaning and drying NMR tubes

How do you clean and dry your NMR tubes? Myths abound in chemistry laboratories.

Your definition of “clean” might change depending on whether you are running 100 micrograms of a peptide or natural product (600’s left and right) or a concentrated sample of something synthetic on the walkup (400 middle)

Firstly, a bit of a caveat about what I mean by “clean”. A natural product chemist wanting maximum possible resolution and the best possible lineshape for spectra of the 100 micrograms of compound they isolated will likely use the best quality tubes they can afford, cleaned with the utmost care. If they are especially well funded, they may well use a brand new tube. If you’re a medicinal chemist collecting data for one intermediate in the synthesis of a 50-compound screening library, you’re likely a bit less fussy. Sure, your final tested compounds will get the extra treatment, so as to have a nice PDF of a proton spectrum to stick in your supporting information, and you’ll likely want to minimise residual solvent peaks. But my point is, there is no one set goal here.

There’s been lots of discussion over the years about best practice for cleaning tubes, but I’m particularly interested in what comes next: drying them. This post is a bit back to front, in that I’m going to put my own personal cleaning protocol at the end and jump straight to the drying bit. There was also a paper on bulk cleaning of tubes published in OPRD that attracted a lot of attention online, some for, some against; clearly it struck a nerve. We do this A Lot. Unless you are rich and genuinely do use the econo tubes as disposables (shudder).

So let’s assume you have a slightly damp but clean NMR tube. What comes next? What about putting it in the lab oven to dry? Many moons ago, in fact last century, as an undergraduate (an Honours student under the Australian system) I was told from on high that the three commandments of NMR tube maintenance were:

“Thou shalt not put NMR tubes in the oven, for they will bend! And they shall never spin again!”
“If thou must put them in the oven, lay them gently down, so the bending be minimised”
“Thou shalt not steal a lab colleague’s clean NMR tubes from the lab oven”

But the budding scientist in me quietly shouted: “Why? Where’s the evidence?” No evidence or references were ever provided. Is this a laboratory myth, perpetuated through the decades? So just recently I went looking for evidence.

There are several academic labs with guides for NMR users that warn against tube warping/deformation in ovens, for example Oxford, Temple, UMich, UCSD, and Guelph. Tube vendors themselves also discourage heating: Wilmad and Norell. Norell say you shouldn’t go above 60°C.

I next put out a call on Twitter for people’s opinions on this, and I must thank everyone who responded – see a list below. It seems that in the main people are pretty happy putting tubes in reasonably hot (~100°C) ovens, although in one case flattening of a tube in a hotter (~120°C) oven has been seen. Most report that laying the tubes down is the preferred option. Low-boron glass having a lower melting point was also mentioned.

All in all, I’m prepared to call bunkum on the lab oven issue: I just don’t believe that tubes will bend under normal lab oven conditions (<100°C). I’ve never seen a tube that couldn’t spin because of it; that’s usually just dirt on the outside from crappy cleaning technique. And besides, nearly all of my high-field NMR since about 2000 has been non-spinning anyway. The other frequent problem I’ve seen in some labs is that nothing gets stolen faster than clean glassware left in the oven; just putting your name on a beaker and hoping that people will respect it is, sadly, often not enough. Of course, if you heat tubes like this over many cleaning cycles and you have an unusually hot oven, you may eventually see some deformation, but I doubt it. Likelier is that your tube will be lost, broken, or stolen long before that eventuality. On the other hand, putting your tubes in the oven may put them at extra risk of being accidentally broken by falling or rolling out if you use the lay-down method.


So I use a variety of techniques depending on how urgent my need for a fresh, clean, dry tube is, and yes, I will use the oven. But mostly I don’t. I should point out that I am not the world’s best example of a chemist doing good laboratory practice. Typically I have around 50-80 dirty tubes at a time, so after my bulk cleaning method (see below), the bottom line is that I usually let them dry in the fumehood until they look dry. Then, if I’m going to be in the lab for a while and can keep an eye on the oven, I’ll put them in, take them out, let them cool, and lock them in my office filing cabinet.

If I’m in a rush, I have been known to put a dozen into a vacuum Schlenk flask on a manifold line attached to an oil pump. This is a good technique: casual theft is less likely, and it gets residual solvent levels on the glass walls down pretty quickly. On other occasions, if I’m really pushed for time, a long blunt-ended needle carefully attached to a Luer-lock line on a nitrogen manifold (or better, flexible HPLC polyethylene tubing) and a few minutes of N2 flow will get the job done – just be careful you don’t launch your tube across the room! Also, even a blunt-ended needle used like this can scratch tubes, which creates a breakage risk. As someone who has had to clean the probe on the walk-up NMR many times due to broken tubes, this is to be avoided!

If you’re still reading and wonder how I do bulk NMR tube cleaning, read on. But if all you’re interested in is “do tubes bend?”, then the take-home message is no: at sensible lab oven temperatures, and (maybe) laid down, they’ll be fine.


Aldrich NMR tube cleaners

There are many ways of cleaning tubes, but for what it’s worth, here’s my standard protocol. [I also won’t go into details on how to remove stubborn stains from tubes with things like fuming nitric acid or aqua regia; your local safety rules might well come into play there as well.] We currently* use these vacuum cleaning devices from Aldrich – usually model B with a B24 Quickfit fitting, attached to a standard 500 mL vacuum filter flask.

They’re a bit expensive but worth it if you have lots of tubes to clean. You stick an NMR tube cap on the rounded end, put the tube in the vacuum apparatus, and pull wash solvent through it. They are a bit breakable, however: the small central glass capillary is prone to snapping off if you drop a tube in under vacuum rather than placing it in.

My normal protocol for cleaning NMR tubes is to first soak them overnight in acetone, standing up in the fumehood. The next morning these are quickly drained off, and then each tube is rinsed in the above apparatus with about 4 good squeezes of the Nalgene acetone wash bottle and placed in another clean beaker to the side. After all tubes have been done, I quickly rinse down the outsides of all the tubes, bunched together, to remove any stray drops of sample residue.

I then replace the acetone-filled filter flask with a fresh one and repeat with de-ionised water, same 4 squeezes etc. Unless I’ve been running a lot of peptides I usually skip the step of washing the outsides with water.

I then replace the water-filled filter flask with a fresh one and repeat with acetone. I then remove all the caps from the bottom of the tubes and rinse the NMR tube butts with a little bit of acetone, putting all the rinsings into a beaker, with all the caps (these get another round of washing later).

Brisbane can get damp...

And here it gets tricky. You now have pretty clean tubes, but in a climate like Brisbane’s they’re going to get condensation on them pretty fast. More importantly, it’s time to ask yourself, “how clean is my acetone?” Because we use so much of it for cleaning purposes, we often save $$$ by buying tech grade, which often contains residues that you’d rather not have in your samples. Obviously, if you’ve been reasonably thorough in washing your tubes, you don’t want to compromise that by putting other organics back in.

So once again it’s time to ask yourself, “what’s my sample worth?” If the next sample is going to be a medchem intermediate, then I’d say you’re nearly done. But if you’re a natural products or peptide/protein person, you’ve got a ways to go. Personally – and this is just me – at this point I stand each tube right way up in a beaker, put a couple of mL of AR-grade diethyl ether into each, quickly drain them, and stand them upside down in the final beaker. This gets pushed to the side in the fumehood and left until the end of the working day. Assuming your fumehood isn’t full of clouds of noxious acids, they’ll be fine.

At the end of the day the clean and pretty much dry NMR tubes are taken out of the hood, in fact, out of the lab, and locked in my office filing cabinet. As required they return to the lab to be used and the cycle starts over. Or as I said above, in urgent cases, vacuum or nitrogen flush methods are useful.

And finally, thanks to those who contributed to this conversation on Twitter: @pinkyprincess, @NotHF, @SuperScienceGrl, @BrandonDoughan, @sarahdcady, @lauravlaeren, @tom_wilks, @stephengdavey, @TimEasun, @AttoSci, @lgamon, @RealTimeChem, @timwbergeron, @ChemistCraig and @volatilechemguy.

*The astute reader will also note that this post is written in a strange mix of past and present tense. Journalistic laziness on my part, or merely the writings of someone who is still coming to terms with no longer being an active researcher?
