Fireflying
In Duck Soup (1933), Rufus T. Firefly, President of Freedonia, insults Ambassador Trentino of Sylvania, and it looks like war. Mrs. Teasdale arranges for Firefly to apologize to Trentino and prevent the war. As Firefly waits for Trentino to arrive, he worries about what will happen if Trentino refuses to shake his hand, and he becomes so angry at that hypothetical insult that he insults Trentino when he does arrive, and the war is on.
https://www.google.com/search?q=Duck+Soup&newwindow=1&tbm=vid&ei=ZFUsWs_2OqypggePzK3ACQ&start=10&sa=N&biw=1536&bih=1072&dpr=0.83
In his dishonor, I call such behavior "Fireflying".
In Star Trek, some people, like first-year philosophy students, worry too much about hypothetical problems, just as Rufus T. Firefly does:
In "A Matter of Time," Picard asks Rasmussen for advice about a difficult decision that could mean life or death for thousands or even millions of people.
RASMUSSEN: Let me put it to you this way. If I were to tell you that none of those people died, you'd easily conclude that you tried your solution and it succeeded. So, you'd confidently try again. No harm in that. But what if I were to tell you they all died? What then? Obviously, you'd decide not to make the same mistake twice. Now, what if one of those people grew up...
PICARD: Yes, Professor, I know. What if one of those lives I save down there is a child who grows up to be the next Adolf Hitler or Khan Singh? Every first year philosophy student has been asked that question ever since the earliest wormholes were discovered. But this is not a class in temporal logic. It's not theoretical, it's not hypothetical, it's real. Surely you see that?
I think that the first-year philosophy students are wrong to worry about the possibility of saving a child who grows up to be the next Adolf Hitler.
If you save a big group of people, some of them will kill people, and some of their descendants will kill people. But so what?
Out of all the tens and hundreds of billions of humans who have ever lived, only five (Genghis Khan, Tamerlane, Hitler, Stalin, and Mao) are believed to have caused tens of millions of deaths. Only a few hundred are believed to have caused hundreds of thousands or millions of deaths.
Statistically, humans who cause death and destruction on a large scale are a tiny minority of humans.
If you save the lives of thousands, millions, or billions of people, each day that each one lives after the date of their otherwise certain death will be a clearly predictable good result of that action. And the same goes for every day of life for every one of their descendants, generation after generation, century after century, millennium after millennium, eon after eon. The good you do might become almost infinitely great in time. And the comparatively rare evil deeds done by them and their descendants will be much less important than the good you will have done.
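The arithmetic behind this argument can be sketched in a few lines. The figures are rough assumptions taken from the numbers above: roughly 100 billion humans ever born, 5 who caused tens of millions of deaths, and "a few hundred" (assumed here to be 300) who caused hundreds of thousands of deaths or more.

```python
# Back-of-envelope estimate: how likely is a randomly chosen person
# to be a large-scale killer? All figures are rough assumptions.

TOTAL_HUMANS_EVER = 100_000_000_000  # ~100 billion, a rough estimate
MEGA_KILLERS = 5                     # caused tens of millions of deaths
MAJOR_KILLERS = 300                  # assumed value for "a few hundred"

p_mega = MEGA_KILLERS / TOTAL_HUMANS_EVER
p_major = MAJOR_KILLERS / TOTAL_HUMANS_EVER

print(f"P(a random person is a Hitler-scale killer): {p_mega:.0e}")   # 5e-11
print(f"P(a random person causes 100,000+ deaths):   {p_major:.0e}")  # 3e-09

# Even if you save a million people, the expected number of
# Hitler-scale killers among them is vanishingly small:
saved = 1_000_000
print(f"Expected Hitler-scale killers among {saved:,} saved: {p_mega * saved:.0e}")
```

On these assumed numbers, you would need to save tens of billions of people before you should expect even one Hitler-scale killer among them, while every person saved contributes decades of predictable good.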
So I say that the first-year philosophy students who worry about the comparatively rare, small-scale, and unpredictable bad side effects of doing large-scale good, instead of focusing on its common, almost infinitely large, and predictable good effects, are making mountains out of molehills, or, as I like to say, "Fireflying".