Thursday, March 15, 2012

QM/MM vs. QM-only studies of large cluster models

How large must a quantum model of an enzyme active site be to achieve optimal results? Proponents of the so-called "cluster model" argue that, most often, good results may be obtained even with small models (< 100 atoms). Fahmi Himo has repeatedly shown that fully including the first layer of amino acids surrounding the reacting substrate (i.e., up to about 150 atoms) yields results that are insensitive to the inclusion of a polarizable-continuum solvent field, and has concluded from these data that such models are sufficient to capture all the relevant enzymatic effects on catalysis.

Walter Thiel has now published a QM/MM analysis of the reaction mechanism of acetylene hydratase (previously studied by Fahmi Himo using increasingly large QM-only models). Inclusion of the surrounding protein dramatically changed the results for the largest model studied by Himo, owing to the absence (in the "cluster model") of two negatively charged phosphate groups adjacent to the active site. Although these charges are largely "shielded" from the active site by neighbouring positively charged amino acids, they give rise to local charge asymmetries that interact differently with the active site during each step of the catalytic cycle. This effect is quite similar to the major influence of internal protein dipoles on enzyme catalysis expounded by Arieh Warshel, and should be kept in mind by all of us who tend to prefer the QM-only approach: a polarizable-continuum model assumes a homogeneous environment surrounding the QM system, and in proteins "it ain't necessarily so".
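
To see why a net-neutral but "shielded" charge pair can still perturb the active site differently at different points of the cycle, here is a toy electrostatics sketch in Python (my own illustration, with made-up coordinates and charges, not taken from either paper):

    # Toy illustration: a net-neutral pair of point charges (a phosphate-like
    # -1 e plus a shielding lysine-like +1 e) still produces a
    # position-dependent potential, which a homogeneous continuum cannot mimic.
    # Coordinates in angstroms, charges in e; potential in e/angstrom
    # (multiply by 332.06 to get kcal/mol per unit test charge).
    import math

    def coulomb_potential(charges, point):
        """Sum q_i / r_i over all point charges; bare Coulomb, no screening."""
        total = 0.0
        for q, pos in charges:
            total += q / math.dist(pos, point)
        return total

    charges = [(-1.0, (0.0, 0.0, 0.0)),   # phosphate oxygen, -1 e (assumed)
               (+1.0, (3.0, 0.0, 0.0))]   # nearby ammonium group, +1 e (assumed)

    # Two hypothetical positions of the reacting center along the cycle:
    site_step1 = (8.0, 0.0, 0.0)   # roughly behind the shielding charge
    site_step2 = (0.0, 8.0, 0.0)   # off to the side

    for label, p in [("step 1", site_step1), ("step 2", site_step2)]:
        print(label, coulomb_potential(charges, p))
    # The two values differ in sign and magnitude: the "shielded" pair acts
    # as a dipole whose field depends on direction, so each step of the
    # catalytic cycle feels a different electrostatic environment.

Even though the pair is net-neutral, it behaves as a dipole: the potential it creates depends on direction, so an active site whose charge distribution changes along the cycle samples different environments, and a homogeneous continuum has no way to reproduce that anisotropy.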

Tuesday, November 29, 2011

An interesting hypothesis on the selection of glucose as major fuel source in neurons

Earlier this year, I wondered why neurons preferentially use glucose as fuel. I have now found an interesting paper by Dave Speijer regarding this problem. He proposes the following reasoning to explain this observation:
  • reactive oxygen species are generated in large amounts by NADH dehydrogenase (complex I) when the amount of oxidized ubiquinone is limited
  • generation of large amounts of FADH2 increases the rate of reduction of ubiquinone, and therefore increases indirectly the amount of harmful radical species generated by NADH dehydrogenase
  • glucose oxidation generates a much smaller amount of FADH2 than fatty-acid oxidation (a quick tally after this list makes the difference concrete). Therefore:
  • especially vulnerable cells may be expected to have evolved a preference for glucose.

    Incidentally, neurons do seem to lack large amounts of one of the enzymes involved in fatty acid oxidation: thiolase.
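
A quick back-of-the-envelope tally of the textbook cofactor stoichiometries (my own numbers, not reproduced from Speijer's paper) makes the difference concrete:

    # Reduced-cofactor yields from complete oxidation (textbook stoichiometry):
    def fadh2_to_nadh_ratio(nadh, fadh2):
        return fadh2 / nadh

    # Glucose: glycolysis (2 NADH) + pyruvate dehydrogenase (2 NADH)
    # + 2 turns of the TCA cycle (6 NADH, 2 FADH2)
    glucose = {"NADH": 2 + 2 + 6, "FADH2": 2}

    # Palmitate (C16): 7 rounds of beta-oxidation (7 NADH, 7 FADH2)
    # + 8 acetyl-CoA through the TCA cycle (24 NADH, 8 FADH2)
    palmitate = {"NADH": 7 + 24, "FADH2": 7 + 8}

    for fuel, y in [("glucose", glucose), ("palmitate", palmitate)]:
        print(fuel, y, "FADH2/NADH =",
              round(fadh2_to_nadh_ratio(y["NADH"], y["FADH2"]), 2))
    # glucose: 0.2 ; palmitate: ~0.48 -- fatty-acid oxidation feeds more than
    # twice as many electrons into the quinone pool per NADH, keeping
    # ubiquinone reduced and (per Speijer) promoting ROS generation at
    # complex I.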

The limits of homology modeling

The computational prediction of the three-dimensional structure of a protein sequence may be performed with a wide variety of techniques, such as homology modeling or threading. In threading, the correct fold is sought by evaluating the energy of the target sequence when it is "forced" to adopt each of the known folding patterns. In homology modeling, one looks for a high-similarity protein sequence with an experimentally determined 3D structure, and mutates it in silico until the desired sequence is obtained. Many different programs and web servers are now available for these tasks, differing among themselves in the force fields used, alignment algorithms, etc. Performance is usually quite good when templates with more than 40% sequence similarity are used.
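
As a minimal sketch of the template-selection step, here is how the identity criterion might be applied to pre-aligned sequences (hypothetical sequences and PDB-like names; a real pipeline would obtain the alignments from a program such as BLAST):

    # Template selection by percent identity; sequences are assumed already
    # aligned, with "-" marking gaps.
    def percent_identity(aligned_query, aligned_template):
        """Identical residues over aligned (non-double-gap) columns."""
        pairs = [(a, b) for a, b in zip(aligned_query, aligned_template)
                 if a != "-" or b != "-"]
        matches = sum(1 for a, b in pairs if a == b and a != "-")
        return 100.0 * matches / len(pairs)

    def usable_templates(aligned_query, candidates, cutoff=40.0):
        """Keep templates above the ~40% identity rule of thumb."""
        return [(name, pid) for name, seq in candidates
                if (pid := percent_identity(aligned_query, seq)) >= cutoff]

    query      = "MKTAYIAKQR-QISFVKSHFSRQ"
    candidates = [("1abc_A", "MKTAYIAKQRLQISFVKSHFSRQ"),   # close homologue
                  ("2xyz_B", "MSTQY-GKDRLALSYIRSHANEQ")]   # remote, filtered out
    print(usable_templates(query, candidates))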

Recently, two small proteins with very high sequence identity (>95%) but widely differing structures have been designed and studied. Starting from a pair of proteins with less than 20% identity and different 3D structures, the authors gradually mutated one sequence into the other, eventually generating two sequences that differ by only a single amino acid yet adopt different folds. Attempts to unravel the precise mechanisms governing the selection of one fold over the other have, however, been inconclusive, because current molecular dynamics protocols and force fields are not accurate enough to capture the small energy differences involved.

Monday, October 17, 2011

Limitations of PCM

A new paper claims to compute the pKa of the nitrous acidium ion from gas-phase DFT computations followed by estimation of solvation effects with a polarizable continuum model (PCM). It is true that geometries most often do not change much on going from the gas phase to solution, but I strongly doubt the results are as accurate as they could be: PCM does not include the contribution of hydrogen bonds between solute and solvent, and I would expect that effect to be quite different in neutral HONO and protonated H2ONO+.
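
For context, the usual thermodynamic cycle behind such pKa estimates looks like this (a sketch with illustrative numbers only, NOT results for H2ONO+):

    # Gas-phase deprotonation free energy corrected by (PCM) solvation free
    # energies of each species, then pKa = dG_aq / (RT ln 10).
    import math

    R_KCAL = 1.987204e-3   # gas constant, kcal/(mol*K)
    T      = 298.15        # K

    def pka_from_cycle(dG_gas, dGsolv_acid, dGsolv_base, dGsolv_proton):
        """pKa for HA(aq) -> A(aq) + H+(aq) via a gas-phase/PCM cycle."""
        dG_aq = dG_gas + (dGsolv_base + dGsolv_proton - dGsolv_acid)
        return dG_aq / (R_KCAL * T * math.log(10))

    # Illustrative values only (kcal/mol); -265.9 is a commonly used proton
    # solvation free energy:
    print(pka_from_cycle(dG_gas=195.0, dGsolv_acid=-68.0,
                         dGsolv_base=-5.0, dGsolv_proton=-265.9))
    # If PCM misses specific hydrogen bonds, the dGsolv terms for HA and A
    # carry errors that do not cancel, and each 1.36 kcal/mol of residual
    # error shifts the computed pKa by a full unit.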

Thursday, September 29, 2011

Dividing research into very small chunks...

Research productivity is most often measured by people who lack the ability to distinguish good papers from bad ones. Such measurements therefore tend to devolve into mechanical algorithms that count the number of publications and the impact factor of the journals where the research was published, rather than sensible arguments about the merits (or demerits) of the researcher. Evaluating a researcher thus becomes a "numbers game", in which a researcher with a larger number of small papers easily outranks one with a smaller number of longer, more complex publications. The race to the "smallest publishable piece of research" increases the number of papers (arguably "good" for the researcher who needs a "good" evaluation) but makes keeping up with the literature harder, as one has to track ever-increasing numbers of papers of dwindling individual importance. It also detracts from the value of the research being reported: in my example today, two papers report computations of very similar compounds. The only difference is the interchange of a nitrogen with a phosphorus atom.
A single paper would have been much more useful and important, but research managers would count that as less productive :-(


PS: I happen to disagree strongly with the suggestion, in these papers, of the existence of intramolecular H-bonding, as the angles involved are too small for hydrogen bonds.
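
For the record, this is the kind of geometric check I mean (hypothetical coordinates; the usual rule of thumb takes D-H...A angles well above ~120-130 degrees, with short H...A distances, as hydrogen bonds):

    import math

    def dha_angle(donor, hydrogen, acceptor):
        """D-H...A angle in degrees from three (x, y, z) points."""
        v1 = [d - h for d, h in zip(donor, hydrogen)]
        v2 = [a - h for a, h in zip(acceptor, hydrogen)]
        dot = sum(x * y for x, y in zip(v1, v2))
        n1 = math.sqrt(sum(x * x for x in v1))
        n2 = math.sqrt(sum(x * x for x in v2))
        return math.degrees(math.acos(dot / (n1 * n2)))

    # A tight intramolecular contact of the kind reported (made-up geometry):
    donor    = (0.00, 0.00, 0.00)   # N
    hydrogen = (0.55, 0.84, 0.00)   # H on N
    acceptor = (1.90, 0.10, 0.00)   # nearby lone-pair bearer
    print(round(dha_angle(donor, hydrogen, acceptor), 1))   # ~94.5 degrees
    # Angles this far below ~120 degrees are better described as strained
    # contacts imposed by the molecular framework than as hydrogen bonds.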

Tuesday, September 27, 2011

What's in a name?

IUPAC distinguishes "Lewis acidity" from "electrophilicity": the first relates to the equilibrium constant of the reaction of an electrophile (i.e., the thermodynamics), whereas electrophilicity relates to the rate constant (i.e., the kinetics) of the reaction. However, actual usage in ordinary chemical parlance is somewhat more ambiguous, as the two concepts are often used interchangeably.
A recent paper on this topic, "Separating Electrophilicity and Lewis Acidity: The Synthesis, Characterization, and Electrochemistry of the Electron Deficient Tris(aryl)boranes B(C6F5)3–n(C6Cl5)n (n = 1–3)", caught my attention. However, this paper does not compare changes in the thermodynamics vs. kinetics of the title compounds upon increasing n. Instead, it compares their Lewis acidity with their ability to capture an electron (which the authors call electrophilicity). Quite a difference, don't you think?
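
To make the IUPAC distinction concrete, here is a small sketch contrasting the two quantities with made-up free energies (my own illustration, not data from the paper):

    # Lewis acidity goes with the equilibrium constant, K = exp(-dG0/RT);
    # electrophilicity (IUPAC sense) goes with the rate constant, here from
    # the Eyring equation, k = (kB*T/h) * exp(-dG_act/RT).
    import math

    R_KCAL = 1.987204e-3     # kcal/(mol*K)
    KB_H   = 2.083661912e10  # kB/h, in 1/(K*s)
    T      = 298.15

    def equilibrium_K(dG0):
        return math.exp(-dG0 / (R_KCAL * T))

    def eyring_k(dG_act):
        return KB_H * T * math.exp(-dG_act / (R_KCAL * T))

    # Two hypothetical boranes: the second binds MORE strongly (more negative
    # dG0, stronger Lewis acid) yet reacts more SLOWLY (higher barrier):
    for name, dG0, dG_act in [("borane A", -8.0, 10.0),
                              ("borane B", -12.0, 14.0)]:
        print(name, "K = %.2e" % equilibrium_K(dG0),
              "k = %.2e /s" % eyring_k(dG_act))
    # The two scales need not even rank compounds the same way, which is why
    # conflating them muddies the discussion.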

Coming soon to a worm near you...

Three stop codons are common in mRNA: UGA, UAA and UAG. These codons usually bind release factors, which prompt the release of the nascent amino-acid chain from the ribosome. Some organisms, however, contain a tRNA complementary to one of these codons; in these organisms, that codon no longer triggers the end of translation but codes for an amino acid instead. Several researchers have used such special tRNAs to develop mutant cells with expanded genetic codes. Greiss and Chin have now taken this a step further: they have engineered a mutant strain of the worm C. elegans that translates every UAG codon as an artificial amino acid. It was a complex endeavour (details are in their paper...) that surely would have deserved a well-publicized press conference :-)
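
As a toy illustration of what the reassignment amounts to (a hypothetical mini codon table, with "X" standing in for the unnatural amino acid; the authors' actual construct of course works at the tRNA level, not in software):

    # Minimal subset of the genetic code, with UAG reassigned:
    CODON_TABLE = {
        "AUG": "M", "UUU": "F", "GGC": "G", "AAA": "K",   # small subset
        "UAA": "*", "UGA": "*",                           # still stop codons
        "UAG": "X",                                       # reassigned codon
    }

    def translate(mrna):
        """Translate until a stop codon; UAG now inserts 'X' instead of stopping."""
        protein = []
        for i in range(0, len(mrna) - 2, 3):
            residue = CODON_TABLE.get(mrna[i:i + 3], "?")
            if residue == "*":
                break
            protein.append(residue)
        return "".join(protein)

    print(translate("AUGUUUUAGGGCAAAUAA"))  # -> "MFXGK": UAG read through as X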