Anecdotal qNMR

Anecdotes On Quantitative NMR From The Early 1970s

by Joseph Ray, Naperville, IL

I first looked at quantitative NMR in the early 1970s, when I was working for the Standard Oil Company (later Amoco and then BP). Instrumentation was a Varian CFT-20 spectrometer dedicated to 13C observation at 20 MHz. The spectrometer had only a 16k data table, so the real spectrum contained only 8k data points, with a digital resolution of 20 x 200/8k ~ 0.5 Hz (20 Hz per ppm at 20 MHz, over a 200 ppm spectral width).
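
That digital-resolution arithmetic is easy to reproduce; here is a minimal Python sketch (modern code for illustration, obviously nothing that ran on the CFT-20):

```python
# Digital resolution of the CFT-20 setup described in the text:
# 20 MHz 13C frequency, 200 ppm spectral width, 8k real data points.
carbon_freq_mhz = 20.0        # 13C observe frequency, MHz (so 1 ppm = 20 Hz)
spectral_width_ppm = 200.0    # spectral width used for crude oils
real_points = 8 * 1024        # 8k real points from the 16k data table

sw_hz = carbon_freq_mhz * spectral_width_ppm           # 4000 Hz
print(f"digital resolution = {sw_hz / real_points:.2f} Hz/point")  # ~0.49 Hz
```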

The problem was the determination of the aromaticity of various crude oils, an essential parameter used by the chemical engineers in optimizing their models for running the refinery. At the time, they were estimating this parameter from a variety of physical properties such as density and refractive index, or trying to determine it by clever uses of 1H NMR, mass spectrometry, and even IR. It was evident that 13C NMR would measure exactly what they wanted, the percentage of aromatic carbons (%CA), but there were questions about whether it was quantitative.

It was well known that one had to wait a minimum of 5 T1's to allow 99.3% recovery of the slowest-relaxing spin, and that the decoupler could only be turned on during acquisition to avoid non-uniform NOE enhancements. We determined that a delay of 60 s was adequate to obtain quantitative data. This may seem too short, since quaternary carbons have T1's that certainly exceed 12 s. However, most of these crude oils contained paramagnetic particles that probably helped shorten the T1's.
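
The bookkeeping behind those numbers follows from the exponential recovery M(t) = M0(1 - exp(-t/T1)). A small sketch, taking the 12 s quaternary T1 quoted above as an assumed worst case:

```python
import math

# Fraction of equilibrium magnetization recovered after waiting a
# delay t, from the exponential model M(t) = M0 * (1 - exp(-t/T1)).
def recovered(t_over_t1: float) -> float:
    return 1.0 - math.exp(-t_over_t1)

print(f"after 5 T1's: {recovered(5.0):.1%}")             # -> 99.3%
# With the 60 s delay and an assumed worst-case T1 of 12 s:
print(f"60 s, T1 = 12 s: {recovered(60.0 / 12.0):.1%}")  # also 5 T1's
```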

Given the above, determining the aromaticity should have been easy: the aromatic carbons (100 to 170 ppm) were well separated from the non-aromatics (0 to 60 ppm), and 1H NMR showed that any olefins (which would also fall in the 100 to 170 ppm region) were negligible. One simply had to integrate two broad regions and calculate %CA. However, there was one more problem that had to be corrected: the baseline smile. The "smile" was an artifact caused by the response of the Butterworth filters.
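
The %CA calculation itself is trivial; a sketch in Python, with placeholder integral values rather than real data:

```python
# %CA from the two broad integral regions described above.
# The integral values in the example call are placeholders, not data.
def percent_aromatic(aromatic_integral: float, aliphatic_integral: float) -> float:
    """Percent aromatic carbon, %CA, from the two region integrals."""
    return 100.0 * aromatic_integral / (aromatic_integral + aliphatic_integral)

print(percent_aromatic(25.0, 75.0))  # -> 25.0 (%CA)
```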

The filters were responsible for two problems: attenuating signals near the ends of the spectrum and introducing the smile. The first problem was simple to fix. The practice at the time was to set the filter bandwidth so that a signal at the edge of the spectrum was attenuated by 3 dB. We instead always set the filter to 1.5 x sw, which removed the filter attenuation of intensity at the spectral edges.

The second problem was more difficult. A delay of the order of 1/sw was used to allow the coil to recover from the pulse before data sampling. A delay on the order of 10 µs was usually short enough to accomplish this, but at a typical sw of 4000 Hz (200 ppm) the actual delay was 1/sw = 250 µs. It turned out that the Butterworth filters used at that time would go from zero response to a linear response in a time on the order of 1/sw. Unfortunately, they would overshoot the linear response before settling down, and as a result the first and second data points were distorted. A dislocation of the first point led to a DC offset that was easily corrected in the transformed spectrum, but the dislocation of the second point produced the "smile", a 0.5 Hz sine wave through the entire spectrum. By adjusting the delay between the end of the pulse and the beginning of the acquisition, one could turn the smile into a frown, because one went from overshooting the linear response to undershooting it as far as the second data point was concerned. One then selected the delay that gave the flattest baseline. This was critical to obtaining quantitative data because there were no baseline-flattening routines at that time. If the baselines were not flat, considerable manipulation using slope and bias adjustments was required, which meant additional, undesirable subjective involvement of the operator.
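
For readers who want to see the first-point/second-point behavior for themselves, here is a small numpy reconstruction (a toy model with assumed parameters, not the original instrument's processing):

```python
import numpy as np

# Toy model: how a single bad FID point maps into the spectrum.
# An error on the first point adds a flat DC offset; an error on the
# second point adds one cosine cycle (the smile/frown) across the
# whole spectrum. Parameters are arbitrary, for illustration only.
N = 1024
t = np.arange(N)
fid = np.exp(2j * np.pi * 0.1 * t) * np.exp(-t / 100.0)  # one decaying line

def baseline_artifact(bad_index: int, error: float = 0.5) -> np.ndarray:
    distorted = fid.copy()
    distorted[bad_index] += error           # dislocate one data point
    return (np.fft.fft(distorted) - np.fft.fft(fid)).real

dc = baseline_artifact(0)     # first point -> constant offset everywhere
smile = baseline_artifact(1)  # second point -> one-cycle baseline roll

print("first point:  min/max =", dc.min().round(3), dc.max().round(3))
print("second point: min/max =", smile.min().round(3), smile.max().round(3))
```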

The chemical engineers were not easily convinced that this new NMR data was something they wanted to plug into their models, which were responsible for operating multi-million-dollar refineries. So the solution was simple: make up samples from pure organic materials of known structure to emulate a crude oil, and demonstrate that the technique was indeed quantitative. We selected 30 organic compounds typically found in petroleum, carefully weighed each one, and prepared blends with aromaticities of about 10, 25 and 50% to emulate the range expected in crudes. We obtained 13C spectra by pulsing once a minute for 16 hrs to average about 1000 scans. Much to my disbelief, none of the %CA's were within 5% of the known values of the mixtures.

The problem was fairly simple, and I think it remains an overlooked key to good quantitation in many cases today, where higher fields are available. Crude oils contain thousands of compounds, many of which are quite similar, and crudes also contain paramagnetics at low levels. Both of these factors result in broad lines. My "synthetic" crudes, in contrast, had highly resolved spectra. The problem was how computers integrate: the integral over n data points is simply the sum of the intensity at each data point. As was pointed out above, the number of data points on the early computers was limited to 8k real points, so lines with widths of less than a Hz were represented by only two or three points. When I applied a line broadening of 20 Hz, the spectra of the synthetic crudes looked exactly like those of the real crudes, the under-representation was corrected, and in all cases the %CA's we determined were within 0.5%. The results showed that 13C NMR is indeed quantitative.
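
A small numpy sketch (with assumed linewidths and the ~0.5 Hz/point digital resolution discussed above) shows how dramatically 20 Hz of exponential broadening changes the number of points representing a narrow line:

```python
import numpy as np

# Sketch (assumed parameters): how many data points lie above half
# height for one line, before and after exponential line broadening,
# at ~0.5 Hz/point digital resolution (4000 Hz over 8k points).
sw, N = 4000.0, 8 * 1024
t = np.arange(N) / sw                        # ~2 s of acquisition

def points_above_half_height(extra_lb_hz: float) -> int:
    fid = np.exp(2j * np.pi * 1000.0 * t)    # one line at 1000 Hz
    fid *= np.exp(-np.pi * 0.8 * t)          # 0.8 Hz natural linewidth
    fid *= np.exp(-np.pi * extra_lb_hz * t)  # applied line broadening
    spectrum = np.fft.fft(fid).real          # absorption-mode spectrum
    return int((spectrum > spectrum.max() / 2).sum())

print("no broadening:   ", points_above_half_height(0.0), "points")   # very few
print("20 Hz broadening:", points_above_half_height(20.0), "points")  # ~40
```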

Soon we were running triplicate analyses to improve the precision. Since each run took 16 hours, this limited us to 2 crudes per week, and we needed to increase our throughput. This was done by taking an old Varian HA-60 spectrometer and converting it to a Nicolet TT-14. The conversion simply kept the wide-gap 14 kgauss iron magnet and replaced the Varian CW (continuous wave) console with an FT console. But the real gain was that the wide-gap magnet could accommodate a 20 mm probe! The CFT-20 used a 10 mm probe, so doubling the diameter resulted in a 4-fold increase in the volume of nuclei observed and a 16-fold decrease in measurement time. We could now run a sample per hour (60 scans with a 60 s delay), and, still doing triplicate analyses, we could run 3 samples a day.
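
The scaling argument is simple; a sketch, under the idealized assumptions that signal is proportional to sample volume and that S/N grows as the square root of the number of scans:

```python
# Idealized scaling for the 10 mm -> 20 mm probe change: signal taken
# as proportional to sample volume (same coil length), and time to a
# fixed S/N scaling as 1/signal**2 because S/N grows as sqrt(scans).
d_old, d_new = 10.0, 20.0               # probe diameters, mm
signal_gain = (d_new / d_old) ** 2      # cross-section, hence volume: 4x
time_saving = signal_gain ** 2          # 16x fewer scans for equal S/N

print(f"signal gain: {signal_gain:.0f}x, time saving: {time_saving:.0f}x")
```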

Next came the use of Cr(acac)3. A 0.08 molar solution of Cr(acac)3 had been shown to decrease T1's dramatically and at the same time level the NOE's. Now we could run samples with a delay (acquisition plus relaxation) of around 3 s. So we increased the number of scans from 60 to 180 and obtained better results in 30 minutes, still running triplicate analyses. Finally, we moved to a 200 MHz wide-bore supercon equipped with a 20 mm probe. I can't remember the gain in sensitivity over the TT-14, but I do remember the salesman and I doing a calculation based on the sensitivity specs that predicted we could get the same S/N in a time equivalent to ¾ of a scan! With the supercon, we ran 20-minute spectra in triplicate until we found that our triplicate runs were varying by less than 0.1%. So we changed to running a single 30-minute acquisition, simply because we were having trouble keeping up with changing a sample every 20 minutes. Ultimately we added a sample changer, but we kept the 30-minute run times; the changer let us run 24 hours a day instead of 8, tripling sample throughput.
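
As a rough consistency check of those run times (assuming the ~3 s cycle quoted above covers acquisition plus relaxation; the numbers are illustrative):

```python
# Rough timing check of the Cr(acac)3 protocol described above,
# assuming a ~3 s cycle (acquisition plus relaxation) per scan.
scans, cycle_s, replicates = 180, 3.0, 3
one_run_min = scans * cycle_s / 60.0
print(f"one run: {one_run_min:.0f} min; triplicate: {replicates * one_run_min:.0f} min")
# -> 9 min per run, 27 min in triplicate, consistent with "30 minutes"
```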

See also Joseph Ray's Key parameters for quantitation.