BBC, Voyager 1 is impressive

July 25, 2016

The BBC published an article with a paragraph that bugs me. It starts with:

The only spacecraft to have made it further than the planets, moons and asteroids of our solar system is Voyager 1.

To claim that Voyager 1 has traveled further away than all the asteroids is to claim that the Oort Cloud has none. Object 1996 PW suggests this is not the case. But if the comparison is limited to asteroids no further from the sun than the Kuiper belt, then Voyager 1 is merely the farthest such spacecraft, not the only one. Behind it are Voyager 2, Pioneer 10, and Pioneer 11. New Horizons hasn’t yet gotten past the Kuiper belt, but it’s on the way.

The paragraph continues:

At the time of writing, this plucky probe was 20,083,476,000 kilometres (12,479,293,426 miles) from Earth, travelling at some 17 kilometres per second. This sounds impressive until you remember that Voyager 1 was launched in 1977, is fitted with early ’70s scientific instruments, cameras and sensors and has been voyaging for almost 40 years.

How does a launch in 1977, and all that implies, make Voyager 1’s distance and velocity any less impressive? I don’t think it does at all.

Here is something that makes both Voyagers very impressive: they still function. Both spacecraft continue to make observations of the solar wind and are now seeing how it interacts with the wind beyond. The current prediction is that both will have enough electrical power to continue making observations and transmitting them until at least 2025, for almost fifty years of operation. How’s that for electronics and mechanical parts that haven’t been replaced or seen a mechanic in forty years, during which time they’ve been exposed to a good amount of ionizing radiation?

The Voyagers are very impressive feats of engineering. Operating for almost forty years only makes them more impressive. How many machines can do that without maintenance?

Weather Underground’s security issue

May 1, 2016
Malware download page

Image of the malware download page

I often use the Weather Underground website to check the forecast, but I may start using the National Weather Service’s site instead. If I leave a tab open on the Weather Underground site in my browser at work, I eventually end up at the page shown in the image above. The browser is an up-to-date Firefox without Adobe Flash running on Windows 7. It gets redirected to another website, always with a different domain name, and always with two seemingly random numbers in the path (visible in the image when large enough). The page looks like it is for downloading Adobe Flash, but it isn’t on Adobe’s website. It sure stinks of malware. It may be coming from something like an advertisement that can sneak a redirection into the web page, rather than from content generated by Weather Underground, or maybe they have a more direct breach of security. Either way, I’m sure Weather Underground wouldn’t do this intentionally, but it is still annoying.

The issue has occurred five times over more than a month, maybe two, on the same computer. I did attempt to inform them of the issue, but I haven’t seen any indication that anyone took it seriously. It has happened twice since then.

At home, I run Firefox on Linux and do have Flash installed, although I usually have it disabled. The issue never happens there. I haven’t yet tried on another system without Flash, but suspect that may trigger the redirection.

What this doesn’t answer is what the malware does when it finds a browser with Flash installed and enabled, rather than one it can redirect. I also didn’t accept the download. I’m not employed to do security research, and the IT department is quite distant.

A call to Comcast

December 31, 2015

I had reason to call Comcast today. It went something like this:

COM: You’ve reached Comcast customer service. This call may be monitored. Someone will answer whenever they get around to it, so wait. More blah . . .

COM: Is this your address?

ME: Yes.

COM: There is an upcoming fight. Would you like to hear about it?

ME: No.

COM: Please say yes or no.

ME: No.

COM: Two grown men will beat the crap out of each other for violent entertainment that costs money to watch. Do you want to pay for it?

ME: [Hang up since no means yes]

The only fight I’d like to see is between Comcast customers, armed with clown hammers, and their head of customer service. It would be the most brutal clown hammer fight ever.

I wonder how many programs I’ll have to pay for before I reach a person. Sounds like another reason to cancel service, but what little competition there is tends to be no better. Thanks FCC!

Useless toilet repair parts

November 22, 2015

It is a good thing that I didn’t leave earlier for Thanksgiving. I heard an odd sound from my bathroom today, followed by the sound of water spilling on the floor. I quickly found the problem and stopped the leak. A connector for the water supply to a toilet had fractured, allowing water from the incoming hose to escape.

Toilet water inlet parts on dirty old floor

In the image above is the hose with the fractured white connector, although the damage isn’t visible. I found that I have a replacement part from a toilet repair kit that I bought years ago (bottom left), but I can’t figure out how to remove the broken part. It seems like I would need to break the metal collar on the hose. I don’t know how to do that without breaking the hose, but even if I could, I would need a replacement for that, too. After searching the web for a while, it seems that the connector part is not replaceable; the whole hose needs to be replaced.

That raises the question: why was that replacement part included in the repair kit that I got? What am I missing?

Boost C++ library and the end of the world

November 8, 2015

While working on some C++ code, I made a mistake and got this error:

ERROR: Throw location unknown (consider using BOOST_THROW_EXCEPTION)
Dynamic exception type: boost::exception_detail::clone_impl
<boost::exception_detail::error_info_injector<boost::gregorian::bad_year> >
std::exception::what: Year is out of valid range: 1400..10000

I find two things rather interesting here. The first is that the Boost date_time library isn’t using the Boost exception library. The second is that the date_time library has defined the year 10000 as the last in the Gregorian calendar.

Based on this, I predict that the end of the year 10000 will be the end of the world. Using the Boost libraries to make end-of-the-world predictions should work about as well as using the Bible, right? The end is nigh!

Sigma 18-35mm f/1.8 autofocus is good

October 5, 2015

Just a quick follow-up to a previous post. I got a lot of use out of my Canon mount Sigma 18-35mm f/1.8 lens with upgraded firmware when my nephew Liam and my brother Jason came for a visit to Central Florida. I previously thought the autofocus wasn’t as good as it is with Canon lenses, but I was wrong. It is about as good as my Canon EF 85mm f/1.8 on the same camera, an EOS 70D. Neither lens gave me perfect autofocus, but I take extra pictures to minimize that trouble. I also used the continuous autofocus feature (Canon calls it “AI Servo” because marketing, I suppose) a lot in an attempt to keep up with a four-year-old. In low light, continuous autofocus seems less accurate than focusing just once; this seems to be an issue with the camera. Overall, I’m really happy with the lens.

Potential change of NRC radiation exposure standards is something to care about

September 27, 2015

The United States Nuclear Regulatory Commission (NRC) is considering a change to the standards for allowable exposure to ionizing radiation. If you live in the US, this is something to care about since these standards affect what certain operations are normally allowed to put into the environment, and what is considered safe for the general public rather than just workers at such facilities as nuclear power plants. The NRC is taking public comments on the matter until November 19, 2015.

The standards for exposure start with a model to describe the health risk of a given exposure to ionizing radiation. The model currently used by the NRC and all other regulatory bodies in the world is known as the Linear No Threshold (LNT) model. This model has the premise that any exposure is bad, and that the risk to health grows proportionally with the exposure. The alternative being considered is the radiation hormesis model. This model supposes that a small exposure has a positive effect on health, but too large an exposure is detrimental. After researching this over the course of a week and a half, I have decided that this isn’t a good change, but it may not be as bad as it sounds.

Scientific research into how ionizing radiation affects the human body has produced some results that support the LNT model and some that support hormesis. There doesn’t seem to be a scientific consensus on the matter, so I haven’t found a good explanation for the discrepancies. There are scientists who argue passionately in support of one conclusion or the other, and they don’t always acknowledge evidence that contradicts their position. This leaves me uncertain about which model is closer to correct.

The NRC is responsible for establishing regulations that keep workers and the public safe. Since there is research supporting both models, the one that assigns greater risk to radiation exposure should be the one used. It will err more on the side of caution and suggest limits closer to the natural environment. The model that does this is LNT, the one currently in use.

The NRC was also asked in the petitions to raise the acceptable exposure limit for the general public to match that of workers who deal with ionizing radiation on the job, such as people who work at nuclear power plants. Given that there is research supporting LNT, that people have varying susceptibility to cancer, and that some people undergo radiation therapy that may increase their susceptibility, this change seems like a risky experiment in proving hormesis. Even if LNT were not well supported and hormesis were, it is not clear to me that the petitioners have considered people who may be more susceptible or vulnerable to cancer from radiation exposure.

My research first led me to the website of Dr. Ian Fairlie, a scientist who has published several papers on the health effects of radiation in peer-reviewed journals. He wrote a post specifically on the matter before the NRC. There, he mentions several peer-reviewed papers to support his case, such as Leuraud et al 2015. That one found leukemia was more common in radiation-monitored workers. It is an important finding because such workers are exposed to low, but higher than background, amounts of ionizing radiation over long periods of time. The finding directly supports the LNT model. It is also interesting because the work was partly supported by the US government, including funding and researchers from the Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health, and the Department of Health and Human Services.

However, Dr. Fairlie also mentions a paper on the rates of childhood leukemia near nuclear power plants in Germany (Kaatsch et al, 2008). While it does show that childhood leukemia is more than twice as common within 5 km of the power plants, it does not establish the cause. The authors concluded that radiation may have little to do with it. The paper states that the natural background radiation over the areas studied is much greater than that released by the power plants, but it seems neither was measured for the study. It also mentions other possible causes of leukemia; there is some evidence of a cause by infection (Alexander et al, 1998). This may seem to call the results of the Leuraud et al study into question, but that study had the advantage of data from personal dosimeters while the Kaatsch et al study did not. In any case, it looks like Kaatsch et al really isn’t good evidence in support of, or contrary to, the LNT model.

From there, my research jumped to the petition of Dr. Carol Marcus to the NRC that seems to have started the process of reconsidering LNT and considering hormesis as a replacement. In it, Dr. Marcus writes, “There has never been scientifically valid support for this LNT hypothesis”. To support this statement, she mentions only one report (“Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation”, from the National Council on Radiation Protection and Measurements, 2001, executive summary link) on the matter and presents an attack on it by Dr. Zbigniew Jaworowski and Dr. Michael Waligórski. This is evidently supposed to discredit all evidence and research that is consistent with LNT rather than hormesis, but it doesn’t refute the kind of empirical data seen in the Leuraud et al study, and likely others as well.

The existence of good peer-reviewed science supporting LNT makes Dr. Marcus’s claim of no valid scientific support for LNT false. However, there is also such science in support of hormesis, so she might yet be proven to be supporting the correct model. I do not think the NRC should be making radiation-monitored workers and the general public accept a potentially greater risk without a scientific consensus on the matter. I do not think it is proper even if a little more radiation might decrease the risk of so-called solid cancers, as claimed by Dr. Marcus in the petition, if it also increases the risk of leukemia. I think further research should be done on the effects of a total lack of ionizing radiation, including natural background radiation. This could be very helpful in better understanding the biological response to radiation.

In researching all this, I also looked into the backgrounds of the scientists. The really interesting one was hormesis proponent Dr. Jaworowski. In addition to publishing science on the effects of radiation in peer-reviewed journals, he also published articles about or related to global warming. He claimed people should be more worried about an upcoming ice age, but published many of these articles without peer review, and they have been largely discredited. So far as I can tell, he was not a climatologist. He had a number of articles published in 21st Century Science and Technology and Executive Intelligence Review; both are publications with ties to Lyndon LaRouche, which brings in some very strange politics.

XPS 13 Developer Edition (2015): Rushed

July 29, 2015

Dell’s XPS 13 Developer Edition from 2015 has some nice hardware, but it was rushed to market. The machine isn’t usable as delivered. Either no QA testing was done, or an impossible deadline was imposed. I’m pretty sure it was a deadline. It took the managers a while to realize this was a problem, but they finally decided to act by no longer selling it until they fix it. Someone with basic Linux sysadmin skills and some time to install the various fixes can get it working quite well, so long as they aren’t a fast typist; key repeats are only mostly fixed as of now. With the fixes, I find it can still fail to resume from suspend, but it doesn’t happen often and leaves nothing logged to indicate the problem. Overall, I like the machine more than I should.

I went for a model with the 1920×1080 display and no touchscreen because I don’t like glossy displays. I do like resolution, and this is plenty for the small size. I’ll be using font anti-aliasing for a while longer; I could probably turn that off with the higher resolution model.

Dell shipped it in a cardboard shipping box that distinguished itself by including a plastic handle so it can be carried around like a briefcase. Inside that is the power supply and cable, and a black box. A very nice looking black box. Inside that is the XPS 13 with foam above and a very well fitted plastic tray below. Clearly a lot of thought went into this, more than I have managed to convey. Underneath the computer is a folded paper on using MS Windows. I know the Developer Edition uses the same hardware as the XPS 13 with Windows pre-installed, but I didn’t expect it would include the same, but useless, documentation. Not a big deal, but maybe a harbinger.

When turned on, the XPS 13 quickly boots and brings up a legal document. Fun stuff. After a short delay, although long enough that I thought it would let me read the document, a video starts playing full-screen. The video cannot be stopped, and trying to switch away doesn’t work. Alt-tab brings up the window switching menu, but it doesn’t switch. I also couldn’t mute the audio or change the volume; I’m used to pressing two keys to get functions like volume control, but only one was required here. I eventually figured that out, but it did make the compulsory video rather annoying. What made it obnoxious was that it was just an animation of logos zooming about, put to music that meant nothing.

With the video done, I got back to reading the legalese. I needed to scroll the text, so I got to use the mouse. It occasionally quit working for two seconds or so before accepting more input. There is a fix for this from Dell, but not on the part of their website for supporting purchased products. That is, if you were to purchase an XPS 13 Developer Edition, log into your account on Dell’s site, find the item you bought, and try to download fixes, then you won’t find them. The fixes are on Dell’s website, just not there. That section is apparently reserved for Windows-related fixes and system firmware, aka BIOS. A search engine is the best way to find the Developer Edition fixes. Once the fix is applied, the mouse no longer ignores input, but occasionally when trying to scroll with it, the system responds as if alt-tab were held down and cycles through the windows super fast. I have no idea how to reproduce the issue. It hasn’t happened in the last couple of weeks, but I’m not sure it won’t happen again.

Soon after that, I got to try typing. It regularly repeated key inputs until another key was pressed. This didn’t take fast typing to observe. A BIOS update mostly corrects it, but people who type really fast report that the change only mitigates the problem. The issue affects Windows as well. Considering that keyboards are common computer hardware that have generally worked well for decades, and that Dell botched it in 2015, it is amazing the rest of the hardware came out as well as it did. The group within Dell that put Ubuntu Linux on the XPS 13 clearly had a hard time dealing with this hardware.

After this, it was time to update the installed software. The Ubuntu Software Updater ran until it got to grub, then seemed to hang on the update while still responding to user input. After waiting half an hour, I killed it and what seemed to be a related process that was eating processor time. Then I used apt-get from a shell and ran whatever command it told me to run when it complained about some problem. Since then, updates have worked correctly. I have to wonder if an uncorrected hung update would render the system unbootable.

Following that, the system needed updates for the graphics to resume reliably from suspend. I still have an occasional issue with it, but matters greatly improved. A remaining issue is how the screen brightness automatically adjusts: it darkens for a mostly dark frame, brightens for a bright frame, and offers no user configurability at all. I was worried this would be an issue for working with photographs, but the screen’s limited color gamut, at least compared to another display I have, has proven a much bigger issue. I just have to learn to avoid oversaturating the color.

Other than that is an occasional crash for no apparent reason. I’ve had it happen shortly after booting the computer and starting to browse the web. It was occurring twice a week, but hasn’t happened in a couple weeks or so; maybe something was fixed.

The XPS 13 Developer Edition was in no condition to ship. Asus did a much better job with their Eee PC line; they might have been limited by their Linux distribution, but they worked fine right out of the box. Still, I’d rather not buy a computer with Windows pre-installed, and I like that Dell is going through some effort to support Linux, including getting patches into the mainline kernel to improve hardware support. I’m guessing the XPS 13 issues were unexpectedly time consuming to fix and management didn’t want to wait.

The Sigma 18-35mm f/1.8, firmware updates, and auto-focus

July 27, 2015

Here is the quick summary: the firmware update for the Canon mount Sigma 18-35mm f/1.8 (2014-8-22) did more than add support for the EOS C100, as claimed by Sigma. It also changed the lens ID from 137 to 150, and seems to have improved auto-focus accuracy, although not precision. Update: The auto-focus is as good as a Canon EF 85mm f/1.8.

I got one of Sigma’s 18-35mm f/1.8 lenses to go with a new Canon EOS 70D over a year ago. At a New Year’s event with a local band that includes a neighbor of mine, I took a bunch of pictures with this combination. The results seemed pretty good in spite of the low light at the outdoor venue. However, I have found that many images I have taken with the lens since then have not been in focus when using the optical viewfinder. This is a fairly common problem with Sigma lenses, although it doesn’t account for the lens’s early good performance. I’m guessing that at the New Year’s event most of the pictures had a deeper depth of field from focusing far enough away.

To deal with the problem, I got one of Sigma’s dock gizmos. I printed out a focus test page and made a table that I filled out with the adjustments needed. Every time I used the dock, the Sigma software asked about updating the lens firmware. I always refused because Sigma claims they just added support for a camera I don’t have. I don’t like to update things unless the update is actually beneficial. It is a way of limiting the chances of dealing with an update that breaks something.

Auto-focus calibration tool

The attempt at improving auto-focus didn’t go well. I got very contradictory results from two attempts, each starting from no adjustments, and neither improved the results. I figured the paper at a 45 degree angle was to blame, so I built a focus target out of Lego. When I cleared the adjustments on the lens before testing, I decided to update the lens firmware just to keep the software from incessantly asking about it. Every test I did with the Lego target suggested the lens was fine. Some tests with more common subjects suggest the accuracy is decent, but the precision still isn’t as good as with Canon lenses, so it is still a good idea to take several pictures and review them.

The software I use for keeping track of my photos, Digikam, did not at first identify this lens. Instead it called it lens 137; it has since been updated. I think the Canon protocol uses an 8-bit unsigned integer to identify the lens model, although now additional information like the focal length range is needed to identify a specific lens model. Since I updated the firmware, Digikam identifies the new images as being taken with lens 150.

I don’t know if this change from 137 to 150 is needed to make the lens work with the EOS C100. It is possible that the change will affect how the lens and camera work together. From what I’m seeing, it has a favorable effect on auto-focus performance with my EOS 70D. I don’t know why Sigma wouldn’t mention this, and I really don’t like the short list of changes common in the photographic industry for such updates. I have suspected that some changes are omitted from the public list of changes, and my experience with Sigma’s 18-35mm f/1.8 lens deepens that suspicion.

Long commands on Windows with SCons

January 15, 2015

At my job, where one of my tasks is to handle builds for Windows software using SCons, I recently moved from using SCons 1.2 with some custom modifications to an unmodified SCons 2.3.4. The change comes along with a move to using a newer Visual Studio while still supporting builds with an older one. The newer SCons has trouble with issuing some commands, just like the older version. Neither version can run programs that have whitespace in their path if the entire command is longer than some threshold. The custom modifications I made were to correct the problem, but this time I wanted something less custom and easier to support. Here is a link to the solution I developed; read on for the details.

With shorter commands, SCons issues the commands about the same way it does on Linux, and it works fine. Longer commands run into trouble on Windows; I’m told it has to do with the C runtime libraries. SCons works around this by placing most of a long command into a temporary file and then passing the name of the file preceded by a ‘@’ character as the first command line argument. Microsoft’s compiler and linker will read the temporary file for their arguments.

The implementation of this has a flaw that makes it useless with default installations of Visual Studio. SCons first puts together the command line as though no special long command handling is required. If the result is too long, the command is modified to use a temporary file, but it is parsed incorrectly; the program to run is taken to be all the characters up to the first whitespace. This makes the command to run “C:\Program”, which isn’t a program, or even an existing path. Everything after the first whitespace is put into the temporary file. It may start with “Files (x86)\Microsoft Visual Studio 10.0\VC\bin\cl.exe”, for example.
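To make the flaw concrete, here is a minimal Python sketch (the function name is mine, not an SCons internal) of what taking everything up to the first whitespace does to an unquoted Visual Studio path:

```python
def naive_split(command):
    """Mimic the flawed parse: everything before the first space
    is treated as the program to run."""
    program, _, rest = command.partition(" ")
    return program, rest

cmd = r'C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\cl.exe /c foo.c'
program, args = naive_split(cmd)
# program is now 'C:\Program', which isn't a program or even an existing
# path; everything else would be written to the temporary file.
```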

I found an attempt at getting SCons to use long commands written by Phil Martin. He provided SCons with a different way to spawn processes. Unfortunately, it doesn’t handle the whitespace issue. By the time the spawn function is called, SCons has already modified the command to use the temporary file. Nevertheless, his implementation was a good place to start. Like his implementation, mine also requires PyWin32.

I modified it to detect the use of a temporary file. When used, the spawn function opens and reads the temporary file to rebuild the complete command line. Then it figures out the whole path to the program, including whitespace, and separates that from the arguments. I also made a modification to produce better error messages from CreateProcess().

Finding the path to the program works best when there is some delimiter in the program path. It is common on Windows to enclose paths that have whitespace in double quotes. The commands constructed by SCons using the default program paths lack this delimiter. I solved this by supplying new, complete paths for all programs in Microsoft’s toolchain. It is quite a bother, but I did it the best and most complete way I could figure. This includes creating build configuration options for several paths, and making two build environments: one for x86 targets and another for amd64 targets.
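Putting those pieces together, here is a rough Python sketch of the rebuilding step. The names and structure are mine, greatly simplified from the real spawn hook, which hands the result to CreateProcess() through PyWin32:

```python
def rebuild_command(argv):
    """Recover the full command when SCons has moved most of it into
    an @tempfile, then split the (double-quoted) program from its args."""
    head, rest = argv[0], argv[1]
    if rest.startswith("@"):
        # SCons wrote everything after the first whitespace to this file.
        with open(rest[1:]) as f:
            rest = f.read().strip()
    full = head + " " + rest
    if full.startswith('"'):
        # A double-quoted program path gives a reliable delimiter
        # between the executable and its arguments.
        end = full.index('"', 1)
        return full[1:end], full[end + 1:].lstrip()
    # Without quotes there is no way to tell where the program ends,
    # which is exactly the original flaw.
    return full.partition(" ")[0], full.partition(" ")[2]
```

With unquoted default paths the last branch falls back to the broken behavior, which is why supplying complete, quoted paths for the whole toolchain matters.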

Hopefully this will help someone else. I really don’t understand why this bug has been around for so long considering that whitespace in paths on Windows is very common.
