
“a very disturbing HARRY_READ_ME.txt file”

November 22, 2009

Good notes on the source code by a blogger here. Also here

66 Comments
  1. Calvin Ball permalink
    November 22, 2009 9:51 pm

    This, I think, sums it up pretty well:

    Asimov
    Posts: 21296
    Incept: 2007-08-26

    Ok, one thing still to say. After reading 4000 lines of this now, I actually feel sorry for the guy. He’s trying his damnedest to straighten out somebody else’s mess.

    NOT something to crucify him for.

    I sure would like to know what happened to Tim Mitchell and why he wasn’t around to explain all his undocumented **** to. And why he didn’t document it.

    And why such a ****ty programmer was running this.

    And several other things, but still, my main point is:

    Harry didn’t make the mess, he’s trying to clean it up. So don’t think TOO bad of him. I really do feel sorry for him now, and there’s a good chance that some of the things I’ve noted above have been fixed now….

    I’m up to sometime after 2007 now in the file. He’s pasting data from then right where I’m stopping. This isn’t old. It started in 2006 or so.

    Asimov
    Posts: 21296
    Incept: 2007-08-26

    Pika: I don’t think that’s the intention of any of these people. They’re just scientists fighting for funding for their projects. Some of them might belong in that category, but I doubt a whole lot of them do.

    It’s the politicians that you need to focus on (as normal…)

    Odd how often they come up as bad guys. Wonder why that is?

  2. kuhnkat permalink
    November 22, 2009 10:35 pm

    Anyone know where Dr. Tim Mitchell went that he couldn’t be consulted on the code??

  3. November 22, 2009 10:38 pm

    So do you think there is going to be any talk of this at the AGU meeting in December?

  4. AKD permalink
    November 22, 2009 10:56 pm

    Perhaps raptured?

    http://www.e-n.org.uk/p-1129-Climate-change-and-the-Christian.htm

  5. November 22, 2009 11:52 pm

    I’ve also been trying to raise awareness of HARRY_READ_ME.TXT with comments on various high-traffic sites. The reality is that they were just trying things and if it looked right they took it. No formal software validation, not even any formal software development processes, hand-tuned data files scattered all around. It also seems that they were trying to go back after the fact and were having difficulty re-creating some published graphs.

    If you had this level of software quality in a medical device the FDA would close you down in a heartbeat.

  6. November 23, 2009 12:21 am

    Steve,
    I’ve long admired your work.

    I look forward to a very interesting and careful/detailed analysis of that Harry file from you.

    I’m sure you’ll find no end of useful information in it. I certainly hope so.

    The day that science does not involve others checking one’s work is the day it should no longer be called science.
    Steve

  7. Dishman permalink
    November 23, 2009 1:38 am

    I’ve been pawing through the file myself for a while.

    As a software guy, I find it really disturbing.

    I haven’t found any references to a specification. Maybe I’m missing them.

    Basically, this software doesn’t qualify as “tested”. It just produces output that looks right:

    It’s not complete yet but it already gives extremely helpful information – I was able to look at the first problem (Guatemala in Autumn 1995 has a massive spike) and find that a station in Mexico has a temperature of 78 degrees in November 1995!

    There’s also indication that the data has been manually fiddled:

    I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before by head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more.

    Is this CRUTEMP he’s talking about?

    People trust this?

  8. November 23, 2009 2:06 am

    Steve

    I don’t know if this is something you already have but here is a good bibliography from Yamal from 1998

    Original Filename: 907975032.txt
    From: Rashit Hantemirov
    To: Keith Briffa
    Subject: Short report on progress in Yamal work
    Date: Fri, 9 Oct 1998 19:17:12 +0500
    Reply-to: Rashit Hantemirov

    Dear Keith,

    I apologize for delay with reply. Below is short information about
    state of Yamal work.

    Samples from 2,172 subfossil larches (appr. 95% of all samples),
    spruces (5%) and birches (solitary finding) have been collected within
    a region centered on about 67°30’N, 70°00’E at the southern part of
    Yamal Peninsula. All of them have been measured.

    Success has already been achieved in developing a continuous larch
    ring-width chronology extending from the present back to 4999 BC. My
    version of chronology (individual series indexed by corridor method)
    attached (file “yamal.gnr”). I could guarantee today that last
    4600-years interval (2600 BC – 1996 AD) of chronology is reliable.
    Earlier data (5000 BC – 2600 BC) are needed to be examined more
    properly.

    Using this chronology 1074 subfossil trees have been dated. Temporal
    distribution of trees is attached (file “number”). Unfortunately, I
    can’t sign with confidence the belonging to certain species (larch or
    spruce) of each tree at present.

    Ring width data of 539 dated subfossil trees and 17 living larches are
    attached (file “yamal.rwm”). Some samples measured on 2 or more radii.
    First letter means species (l- larch, p- spruce, _ – uncertain), last
    cipher – radius. These series are examined for missing rings. If you
    need all the dated individual series I can send the rest of data, but
    the others are don’t corrected as regards to missing rings.

    Residuary 1098 subfossil trees don’t dated as yet. More than 200 of
    them have less than 60 rings, dating of such samples often is not
    confident. Great part undated wood remnants most likely older than
    7000 years.

    Some results (I think, the temperature reconstruction you will done
    better than me):

    Millennium-scale changes of interannual tree growth variability have
    been discovered. There were periods of low (5xxx xxxx xxxxBC), middle
    (2xxx xxxx xxxxBC) and high interannual variability (1700 BC – to the
    present).

    Exact dating of hundreds of subfossil trees gave a chance to clear up
    the temporal distribution of trees abundance, age structure, frequency
    of trees deaths and appearances during last seven millennia.
    Assessment of polar tree line changes has been carried out by mapping
    of dated subfossil trees.

    According to reconsructions most favorable conditions for tree growth
    have been marked during 5xxx xxxx xxxxBC. At that time position of tree
    line was far northward of recent one.
    [Unfortunately, region of our research don’t include the whole area
    where trees grew during the Holocene. We can maintain that before 1700
    BC tree line was northward of our research area. We have only 3 dated
    remnants of trees from Yuribey River sampled by our colleagues (70 km
    to the north from recent polar tree line) that grew during 4xxx xxxx xxxx
    and 3xxx xxxx xxxxBC.]
    This period is pointed out by low interannual variability of tree
    growth and high trees abundance discontinued, however, by several
    short xxx xxxx xxxxyears) unfavorable periods, most significant of them
    dated about 4xxx xxxx xxxxBC. Since about 2800 BC gradual worsening of
    tree growth condition has begun. Significant shift of the polar tree
    line to the south have been fixed between 1700 and 1600 BC. At the
    same time interannual tree growth variability increased appreciably.
    During last 3600 years most of reconstructed indices have been varying
    not so very significant. Tree line has been shifting within 3-5 km
    near recent one. Low abundance of trees has been fixed during
    1xxx xxxx xxxxBC and xxx xxxx xxxxBC. Relatively high number of trees has been
    noted during xxx xxxx xxxxAD.
    There are no evidences of moving polar timberline to the north during
    last century.

    Please, let me know if you need more data or detailed report.

    Best regards,
    Rashit Hantemirov

    Lab. of Dendrochronology
    Institute of Plant and Animal Ecology
    8 Marta St., 202
    Ekaterinburg, 620144, Russia
    e-mail: rashit@xxxxxxxxx.xxx
    Fax: +7 (34xxx xxxx xxxx; phone: +7 (34xxx xxxx xxxx
    Attachment Converted: “c:\eudora\attach\yamal.rwm”

    Attachment Converted: “c:\eudora\attach\Yamal.gnr”

    Attachment Converted: “c:\eudora\attach\Number”

  9. GaryC permalink
    November 23, 2009 2:09 am

    Anybody who is trying to duplicate Harry’s work with the IDL code, first, you have my sympathy.

    Second, if you don’t have access to IDL, it is at least worth trying to use the open source IDL-compatible package GDL. Here is a link to the home page for the project.

    http://gnudatalanguage.sourceforge.net/

  10. The Blissful Ignoramus permalink
    November 23, 2009 2:56 am

    Further on the CRU documents – has anyone checked out the tellingly titled “Extreme2100.pdf”? All looks suspiciously like cherry-picked “Yamal ‘extreme’ tree rings”, to a mere ignoramus like myself.

  11. AndyL permalink
    November 23, 2009 3:08 am

    So now we have the real reason why climate scientists would not “free the code”, and it’s something suspected on CA for a long time.

    Embarrassment.

  12. November 23, 2009 4:14 am

    If you look up Harry at CRU, under his name it says:

    “Dendroclimatology, climate scenario development, data manipulation and visualisation, programming”

    I wonder if CRU might like to rephrase that…

  13. November 23, 2009 5:41 am

    What is the ‘decline’ thing anyway? It appears in a lot of the code, and seems to involve splicing two data sets, or adjusting later data to get a better fit. Mostly (as a programmer), it looks like a ‘magic number’ thing, where your results aren’t quite right, so you add/multiply by some constant rather than deal with the real problem. Aka “a real bad thing to do” : ).

    \FOIA\documents\osborn-tree6\mann\mxdgrid2ascii.pro

    printf,1,'Osborn et al. (2004) gridded reconstruction of warm-season'
    printf,1,'(April-September) temperature anomalies (from the 1961-1990 mean).'
    printf,1,'Reconstruction is based on tree-ring density records.'
    printf,1
    printf,1,'NOTE: recent decline in tree-ring density has been ARTIFICIALLY'
    printf,1,'REMOVED to facilitate calibration. THEREFORE, post-1960 values'
    printf,1,'will be much closer to observed temperatures than they should be,'
    printf,1,'which will incorrectly imply the reconstruction is more skilful'
    printf,1,'than it actually is. See Osborn et al. (2004).'

    \FOIA\documents\osborn-tree6\briffa_sep98_d.pro

    ;mknormal,yyy,timey,refperiod=[1881,1940]
    ;
    ; Apply a VERY ARTIFICAL correction for decline!!
    ;
    yrloc=[1400,findgen(19)*5.+1904]
    valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
    2.6,2.6,2.6]*0.75 ; fudge factor

    \FOIA\documents\osborn-tree6\mann\mxd_pcr_localtemp.pro

    ; Tries to reconstruct Apr-Sep temperatures, on a box-by-box basis, from the
    ; EOFs of the MXD data set. This is PCR, although PCs are used as predictors
    ; but not as predictands. This PCR-infilling must be done for a number of
    ; periods, with different EOFs for each period (due to different spatial
    ; coverage). *BUT* don’t do special PCR for the modern period (post-1976),
    ; since they won’t be used due to the decline/correction problem.
    ; Certain boxes that appear to reconstruct well are “manually” removed because
    ; they are isolated and away from any trees.

    \FOIA\documents\osborn-tree6\combined_wavelet_col.pro
    ;
    ; Remove missing data from start & end (end in 1960 due to decline)
    ;
    kl=where((yrmxd ge 1402) and (yrmxd le 1960),n)
    sst=prednh(kl)

    \FOIA\documents\osborn-tree6\mann\oldprog\calibrate_correctmxd.pro

    ; We have previously (calibrate_mxd.pro) calibrated the high-pass filtered
    ; MXD over 1911-1990, applied the calibration to unfiltered MXD data (which
    ; gives a zero mean over 1881-1960) after extending the calibration to boxes
    ; without temperature data (pl_calibmxd1.pro). We have identified and
    ; artificially removed (i.e. corrected) the decline in this calibrated
    ; data set. We now recalibrate this corrected calibrated dataset against
    ; the unfiltered 1911-1990 temperature data, and apply the same calibration
    ; to the corrected and uncorrected calibrated MXD data.

    \FOIA\documents\osborn-tree6\mann\oldprog\maps12.pro
    ;
    ; Plots 24 yearly maps of calibrated (PCR-infilled or not) MXD reconstructions
    ; of growing season temperatures. Uses “corrected” MXD – but shouldn’t usually
    ; plot past 1960 because these will be artificially adjusted to look closer to
    ; the real temperatures.
    ;

    \FOIA\documents\osborn-tree6\mann\oldprog\pl_decline.pro

    ;
    ; Now apply a completely artificial adjustment for the decline
    ; (only where coefficient is positive!)
    ;
    tfac=declinets-cval
    fdcorrect=fdcalib
    for iyr = 0 , mxdnyr-1 do begin
    fdcorrect(*,*,iyr)=fdcorrect(*,*,iyr)-tfac(iyr)*(zcoeff(*,*) > 0.)
    endfor
    ;
    ; Now save the data for later analysis
    ;
    save,filename='calibmxd3.idlsave',$
    g,mxdyear,mxdnyr,fdcalib,mxdfd2,fdcorrect
    ;
    end

    \FOIA\documents\osborn-tree6\summer_modes\pl_decline.pro
    ;
    ; Plots density ‘decline’ as a time series of the difference between
    ; temperature and density averaged over the region north of 50N,
    ; and an associated pattern in the difference field.
    ; The difference data set is computed using only boxes and years with
    ; both temperature and density in them – i.e., the grid changes in time.
    ; The pattern is computed by correlating and regressing the *filtered*
    ; time series against the unfiltered (or filtered) difference data set.
    ;
    ;*** MUST ALTER FUNCT_DECLINE.PRO TO MATCH THE COORDINATES OF THE
    ; START OF THE DECLINE *** ALTER THIS EVERY TIME YOU CHANGE ANYTHING ***
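    The ‘fudge factor’ snippet above can be sketched in Python to show what an additive adjustment like valadj does. This is a minimal illustration, not CRU’s code: it assumes the adjustments are interpolated linearly between the yrloc breakpoints (which is what IDL’s interpol() does) and then added to a yearly series; the quoted excerpt does not show the application step itself.

```python
# Minimal sketch (NOT CRU's code): how an additive "fudge factor" like the
# quoted valadj array could be applied to a yearly series. Assumes linear
# interpolation between the yrloc breakpoints, mimicking IDL's interpol().

def interpol(v, x, u):
    """Linearly interpolate tabulated values v, given at sorted points x,
    onto the points u (extrapolating from the end segments)."""
    out = []
    for t in u:
        if t <= x[0]:
            i = 0
        elif t >= x[-1]:
            i = len(x) - 2
        else:
            i = max(j for j in range(len(x) - 1) if x[j] <= t)
        frac = (t - x[i]) / (x[i + 1] - x[i])
        out.append(v[i] + frac * (v[i + 1] - v[i]))
    return out

# Breakpoints and adjustments transcribed from the quoted briffa_sep98_d.pro
yrloc = [1400] + [1904 + 5 * k for k in range(19)]   # 1400, 1904, 1909, ..., 1994
valadj = [x * 0.75 for x in
          [0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1, 0.3, 0.8,
           1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]]  # "fudge factor"

years = list(range(1880, 1995))
yearlyadj = interpol(valadj, yrloc, years)
# adjusted_series[i] = raw_series[i] + yearlyadj[i] would then be plotted
```

    Interpolated onto 1880–1994, the adjustment is zero until the early 1920s, dips slightly negative through the 1930s–40s, then ramps steeply to 1.95 by 1994: exactly the shape that would offset a post-1960 decline.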

  14. November 23, 2009 5:56 am

    I wouldn’t be too hard on Harry. I read through that whole read-me tonight, and he has the job from hell (been a programmer for a long time, but that tops any horror stories I’ve lived through!). He is mostly trying to merge incompatible data sets that are inconsistent, partial, and undocumented. He is also working with code he inherited that seems pretty awful. He is making a lot of mistakes, but everyone does, just not everyone documents them truthfully in semi real time like that. Being honest about your process and retaining a sense of humor in that kind of task speaks a lot about the guy.

    The state of data collection there is pretty shocking though. The end result of all that will never be good — no surprise when things don’t match measurements later, just don’t shoot the programmer : ).

  15. Basil Copeland permalink
    November 23, 2009 7:20 am

    I’m sure this will all be clearer at some point, but what connection, if any, is there between “Mike’s Nature Trick” and the “decline” adjustments in the HARRY_READ_ME? What little I’ve read suggests to me that “Harry” was working to update the code for CRU TS. Is that the conclusion any of the rest of you have drawn? That is a different ball game from the paleo/divergence issue, isn’t it?

  16. Dean permalink
    November 23, 2009 8:45 am

    “There are no evidences of moving polar timberline to the north during
    last century.”

    Amazing!!!

  17. Dean permalink
    November 23, 2009 8:49 am

    What is this code supposed to do? Do we know how it fits into the overall GCM codes (if it does at all)? Is it just a code that manipulates the data?

  18. bender permalink
    November 23, 2009 8:51 am

    Maybe Phil should spend less time jetting around the world and more time making sure his poor programmers don’t inherit nightmare legacy code. Needles-in-your-eyes awful.

  19. ncmoon permalink
    November 23, 2009 9:23 am

    Yes, I think Harry has been told to try to replicate their current datasets, and especially the gridded output datasets.

    This starts with station data from GHCN and other places, which gets processed (to make .cts files). These then get merged into a ‘database’ consisting of .dtb/.dts and other files. This isn’t a database in the sense that a programmer in the 21st century might think of one: we aren’t talking relational or SQL here. Hell, there isn’t even any sign of indexing. This is how universities did computing back in the 1970s: data came in on cards or tapes, and you’d process them to produce a new pile of cards or output on tape.

    To produce the gridded datasets, the .dtb file data gets converted to some text files. These can then be read in by some IDL scripts. IDL is a higher-level language than Fortran, has lots of special graphics stuff, and most usefully has a routine for triangulating data. This IDL routine is what does the interpolation between station data points to generate the gridded data points.

    All of this is explained by Tim in the readmes. Harry seems to be so offended by the fact that a readme starts with an underscore that he doesn’t actually read them.

    It’s my guess that there are no professional software developers at CRU. Nor are there any professional data librarians or archivists. The code and data just exist, a communal pile of junk. If a postgraduate needs something, they just have to go and code it for themselves. In practice, in this environment, some poor sod gets a reputation as the person who knows how to make the computers work. This was someone called Mark, then Tim got lumbered, and now young Harry has had the baton passed to him.

    What you have to consider is that CRU isn’t a government agency like the Office for National Statistics or the Met Office. They are basically a university department. Status for a university comes in terms of PhDs produced, papers authored, and so on. Money isn’t going to be taken away from that in order to fund someone managing the data properly.

    And anyway, PhD students are a hell of a lot cheaper than professional software developers 🙂
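    The triangulate-and-interpolate step described above (IDL’s TRIANGULATE/TRIGRID pair) reduces, inside each triangle of stations, to linear barycentric interpolation. A toy Python sketch of that per-triangle step, with station coordinates and anomaly values invented purely for illustration:

```python
# Toy sketch of the per-triangle step behind gridding schemes like IDL's
# TRIANGULATE/TRIGRID: linear (barycentric) interpolation of station values
# at a grid point inside a triangle of three stations. All coordinates and
# anomaly values below are invented for illustration.

def barycentric_interp(p, tri, vals):
    """Linearly interpolate vals, given at the triangle vertices tri, at point p."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * vals[0] + w2 * vals[1] + w3 * vals[2]

# Three hypothetical stations (lon, lat) with temperature anomalies
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
anoms = [0.2, 0.8, 0.5]

# A grid point at the triangle's centroid gets the mean of the three values
centroid = (10.0 / 3, 10.0 / 3)
print(round(barycentric_interp(centroid, stations, anoms), 4))  # 0.5
```

    A full gridder would Delaunay-triangulate all stations and apply this inside whichever triangle contains each grid point; the sketch shows only the interpolation itself.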

  20. November 23, 2009 9:24 am

    BTW I’ve uploaded the readme in sections starting here – http://di2.nu/foia/HARRY_READ_ME-0.html – the last section (35) needs further splitting.

    Personally I found section 20 to be short and fascinating….

  21. November 23, 2009 9:48 am

    I should point out that, as with the others here, my main feelings regarding “Harry” are “Thank %deity% I didn’t have that job”.

    It is blindingly obvious that this code – which appears to be the magic blackbox that creates the HADCRU3 stuff – is a total mess. No wonder CRU didn’t want to show it to anyone.

  22. Håkan B permalink
    November 23, 2009 11:42 am

    Was it Harry blowing the whistle? Is this his alibi?

  23. November 23, 2009 12:19 pm

    I wonder if Harry had been given the thankless job of trying to sort out the CRU data so as to get it asap into a better state for audit, i.e. knowing that a future non-refusable demand for audit/transparency was possible.

  24. SineCos permalink
    November 23, 2009 2:17 pm

    How many times have these guys said that you don’t need the code because all the details of how to duplicate the work are in the published literature?

    This file shows that they can’t duplicate their own work from the published literature. And it also shows that they apparently don’t have all of their old code… so did they ever refuse to provide code rather than admit that they didn’t have it?

  25. tty permalink
    November 23, 2009 3:54 pm

    I can understand that HadCRU isn’t that keen on studying the climatic effects of changes in cloudiness. According to Harry the code to create gridded cloudiness data has been lost for good, and was undocumented, so they can’t recreate it. Up to 1995 they only have the results file. After 1995 they use a different procedure that calculates cloudiness from sunshine data.
    I suppose they could start from scratch, but it would look odd if they got a different result from the published one the next time around.

  26. Håkan B permalink
    November 23, 2009 4:54 pm

    Just a comment, possibly someone has already noticed. In my downloaded file all documents in the mail folder have the timestamp 2009-01-01-06:00:00, the same for some files in the documents folder, including README_HARRY.txt. Does this lead us somewhere? I do see that the files in the documents folder with this timestamp seem to be the most interesting, but maybe I’m just fooled by this observation. greenpeace.txt seems to be a mail which I can’t find in the mails folder. I find it very hard to believe that someone actually was at work at UEA at that very moment.

  27. Håkan B permalink
    November 23, 2009 5:53 pm

    An addition: I just had a look at the files in Windows XP, where the timestamps are 2009-01-01-00:00; on my Linux box they are 2009-01-01-00:06, an interesting difference. My timezone is CET, Stockholm, Sweden.
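    One possible reading of these uniform stamps, offered purely as an illustration: the same instant renders as different wall-clock times under different timezone settings, and files stamped at midnight local time in a zone six hours behind UTC would show 06:00:00 to a tool reporting UTC. A small Python sketch (the UTC-6 offset is an assumption for illustration, not a claim about where the archive was assembled):

```python
# Sketch: one instant, different wall-clock renderings. The UTC-6 offset is
# illustrative only; it simply reproduces the 06:00:00-vs-midnight pattern
# reported for the archive's file timestamps.
from datetime import datetime, timezone, timedelta

utc = timezone.utc
utc_minus_6 = timezone(timedelta(hours=-6))  # hypothetical stamping zone

# Midnight local time in a UTC-6 zone is 06:00 UTC
stamp_local = datetime(2009, 1, 1, 0, 0, tzinfo=utc_minus_6)
as_utc = stamp_local.astimezone(utc)
print(as_utc)  # 2009-01-01 06:00:00+00:00
```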

  28. Alexander Harvey permalink
    November 23, 2009 6:48 pm

    “Gizza job!” “I can do that!” (Yosser Hughes, Boys from the Blackstuff)

    Seriously it is a bit of a trip down memory lane.

    Real programmers don’t do comments, and they speak in Octal.
    Any comments that do exist would have been written because the author lacked confidence, didn’t know what he was doing, and would have been written first and then ignored.
    Some comments were useful. One that comes to mind: “You won’t understand this bit!”
    Trust the code, not the programmer.
    Compilers exist, so that you can bin the source code.
    If your program doesn’t do what it ought, run it on another computer, or in the middle of the night.
    But if it is an old object try to find the computer the programmer had hand-wired with that magic extra instruction.

    There are reasons why all that changed: primarily the rise of the IT manager, the introduction of the career path, and the invention of specialities; in short, the death of the programmer as renaissance man. But it was a hoot!

    I should like to say that the commentary in the readme, reads like something out of the 70s, but I guess it is a bit more up to date than that.

    It seems that he is re-working what was meant to be a one-off but had turned into a two-off; it is not software that was ever intended to be handed over to operations. When you are done, you are done.

    Unfortunately, stuff like that is never intended for the level of scrutiny it is now going to get. That does not make it bad per se. Maybe he will never get it to do what it did last time, but hey, maybe last time wasn’t right either.

    But the Big Question is: Fit for purpose?

    Well yes and no, the purpose has magically changed. It was when the output was of purely academic interest, but as part of a mission to change the world, sadly not.

    I very much doubt that Harry’s situation is unique, or that his fubar post-holocaust wreckage of a system is as bad as it could be. Also, do not blame anyone for not being a professional. The original versions of all the great classic software systems were written by non-professionals. And I might say that during my career, whenever anyone claimed to be a professional I reminded him/her that we did not belong to a professional body, we could not be struck off, defrocked, or court-martialled, and that when we walked away we just left the faint echo of jangling spurs.

    From what little I have read, he does know what he is doing, in that he does understand the objective. That is very 1970s/80s: pick people who understand the problem over people who know nothing but computing. It used to be said like this: “If I need to explain it all to you, it would be quicker to do it myself.” His is not a huge software project; it is a man-and-his-dog effort. Also, it is not necessarily something that can be pondered too much in advance. It is just an old-fashioned can of worms whose contents must be swallowed one at a time. To his credit he has provided a commentary, a narrative that could be of help next time. If the next programmer bothers to read it, which he/she may not.

    So all in all, I can only see this as a lack of trust thing.
    Can someone working in this fashion come up with good results? Yes.
    Should his boss trust him to do so? Yes
    Should the big boss trust the boss? Yes
    Should a politician trust the big boss? Yes
    Should we trust the politician? ….

    Err NO!

    So with the introduction of the last link, a lack of trust cascades back down the line, and the little guy gets the red face, and the rest of them get to point a lot of fingers both up and down the chain. So who is to blame for their distress? Well, who brought this house of cards down?

    Well I think that is obvious.

    Steve, take a bow.

    Many Thanks

    Alex

  29. Håkan B permalink
    November 23, 2009 7:51 pm

    All very true. I really feel for this guy Harry: he didn’t invent this, he’s the guy who was appointed to carry on someone else’s work, someone who really seems to have messed things up. To make it worse, he’s in the wrong business. What other business would demand steady, yearly growth from the IT system? ‘Hey, we had a revenue of 500 million bucks last year and have had a steady 5% increase for the last 10 years; the new system has to keep up with that!’ Poor Harry!

  30. andrew permalink
    November 24, 2009 10:13 pm

    1062618881.txt

    Read it all

  31. Gary Luke permalink
    November 25, 2009 2:01 am

    Some of the numeric suffixes on filenames handled by Harry might give a clue to their dates. Near the start, the files in the directory beginning with ‘+’ are tmp.0311051552 – that’s almost 4pm on the 5th Nov 2003.

    Almost halfway through the file –
    WELCOME TO THE DATABASE UPDATER
    Writing vap.0710241541.dtb
    and the next attempt at a run
    Writing vap.0710241549.dtb

    Reading other filenames, Harry was working on this patch on the 24th Oct 2007, using a master database from 18th Nov 2003 and an update from 11th Sept 2007.
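    The suffix decoding above can be automated: the ten digits are a YYMMDDHHMM stamp. A small Python parser (the helper name parse_suffix is mine; the filenames are those quoted from the READ_ME):

```python
# Parse the ten-digit YYMMDDHHMM timestamp suffixes that appear in the
# CRU working filenames quoted above (e.g. tmp.0311051552, vap.0710241541.dtb).
from datetime import datetime

def parse_suffix(name):
    """Extract the ten-digit YYMMDDHHMM component from a dotted filename."""
    digits = next(p for p in name.split(".") if len(p) == 10 and p.isdigit())
    return datetime.strptime(digits, "%y%m%d%H%M")

print(parse_suffix("tmp.0311051552"))      # 2003-11-05 15:52
print(parse_suffix("vap.0710241541.dtb"))  # 2007-10-24 15:41
```

    Run over the filenames in the READ_ME, this recovers the working timeline Gary Luke reconstructs by hand.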

  32. Fai Mao permalink
    November 25, 2009 2:02 am

    This actually looks like a programmer who was given faulty data and was told to make it fit.

    In any case the AGW people are in deep trouble because now it is clear that they were lying.

    I wonder if Harry was the hacker? Maybe he got tired of dealing with obvious lies and exported all of this.

  33. BruceL permalink
    November 25, 2009 5:04 am

    I have referred this (the HARRY file) to the UK PM’s office for comment, and also for a comment on Monbiot’s call for re-analysis and for the head of Jones. I doubt I will get a reply [or I will end up like poor Dr Kelley.]

  34. Gary permalink
    November 25, 2009 8:40 pm

    Another interesting comment from Harry

    “Oh, sod it. It’ll do. I don’t think I can justify spending any longer on a dataset, the previous version of which was
    completely wrong (misnamed) and nobody noticed for five years.”

  35. makomk permalink
    November 26, 2009 8:42 am

    Robin Debreuil: the decline is a decrease in the temperature calculated by measuring tree growth since 1960 or so. The fudges appear intended to bring it in line with actual measured temperature, which has not declined.

  36. December 5, 2009 3:31 am

    http://www.geog.ox.ac.uk/staff/mnew.html

    Think I found Mark

Trackbacks

  1. Hacked Emails from GW Advocacy Group; "Hide the Decline" - Politics and Other Controversies - Democrats, Republicans, Libertarians, Conservatives, Liberals, Third Parties, Left-Wing, Right-Wing, Congress, President - Page 15 - City-Data Forum
  2. Republic Broadcasting Network » Blog Archive » Climategate Exposes the Alarmist Machine
  3. Climategate Exposes the Alarmist Machine Climate scientists funded by energy companies | MATADOR NINETY-FOUR
  4. Republic Broadcasting Network » Blog Archive » A Message to the Environmental Movement: Your Movement Has Been Hijacked
  5. Operation Mind Seed » Blog Archive » A Message To The Environmental Movement
  6. A Message to the Environmental Movement: Your Movement Has Been Hijacked | The Aperio Movement
  7. The CRU Climate Leak – 2 « Science and Language
  8. Climategate Exposes the Alarmist Machine « climategate.tv
  9. A Message to the Environmental Movement – Transcript « climategate.tv
  10. Mensagem ao movimento ambientalista
  11. A Message to the Environmental Movement | Sovereign Independent
  12. A message to the environmental movement « Anti Oligarch
  13. A Message to the Environmental Movement [HD] : The Corbett Report
  14. A Message to the Environmental Movement [HD] | CentralTRUTH
  15. A Message to the Environmental Movement [HD] | The International Reporter
  16. A Message To The Environmental Movement (VIDEO)Quantum Reality | Quantum Reality
