Why Trump Kept His Lead

Many high-brow conservatives watching Trump’s debate believed that he imploded on-air, and even went as far as to dismiss all of the polls that show him with a clear lead over every other candidate. They believe that choosing a leader is a technocratic matter – find the man with the right set of policies, with the right stance on the issues, let him govern and the output will be inherently positive. The man himself does not matter, merely the formula. I believe the reality of the situation is quite the opposite: geopolitics does not leave much room for maneuvering towards ideological positions. It depends instead on how the few options one has are executed, which in turn depends on the character, not the ideology, of the POTUS.

They were incensed that Trump did not elaborate on his positions, instead preferring to fire back at and deflect criticism thrown at him by the moderators. They miss the entire point of debates – they exist to test a candidate’s ability to handle himself on live TV. It is a test of character, one that Trump won by a wide margin. Every time a moderator hit him with a tough question and he didn’t back down, he won yet another battle. He ended up with more airtime in the debate than any other candidate and had double the time that Rand Paul had. If it seems bizarre to say that Trump won in a test of character, it’s only because the politicians that the system buys and sells are so lame and lukewarm. If tomorrow Jeb Bush’s advisors had a poll showing high heels and miniskirts to be indicative of winning Iowa and New Hampshire, then the next day Bush would be strutting around showing his legs off to voters.

Instead of apologizing or doubling back on giving money to Hillary, Trump doubles down and admits that he’s given money to nearly every other candidate there. His lenders lost money? Double down on his character as a real-world businessman and contrast it to the moderators’ unrealistic view of the financial world. Unlike most candidates, Trump is able to project a realistic and believable character to people, which contrasts with the GOP’s current crop of boy scouts trying to one-up each other as Mr. Rogers. If these people are scared to debate a man like Trump, then how in the world are they going to go head-to-head against Assad, Putin or Suleimani? People calling on Trump to apologize miss the point entirely: if he backs down or starts apologizing for being himself he will destroy the thing that makes him so different from other candidates. Trump follows a President who made a famous remark about drawing a red line regarding Assad’s use of chemical weapons and then followed up by doing nothing, a President who has become infamous for apologizing for America and its Christian majority, even when it was inappropriate to do so. Trump’s entire campaign leverages the humiliation the current POTUS has inflicted on America and wraps it into the simple slogan, “Make America Great Again”. Rather than crafting a strategy to reconnect with their base the way an outsider like Trump did, the establishment candidates hold their own base in contempt, using the same words and values that their bitter enemies normally use to lambast them.

The other, even lamer criticism is that Trump is being rude and insensitive and is therefore not suitable Presidential material. In truth, this veneer of politeness has not been the norm for elections in most of US history. There is no tradition of civility in American politics. Andrew Jackson was a fountain of profanity and insults and his opposition replied in kind, famously calling his wife a prostitute. The feud between John Adams, Alexander Hamilton and Thomas Jefferson is legendary.

“That bastard brat of a Scottish peddler! His ambition, his restlessness and all his grandiose schemes come, I’m convinced, from a superabundance of secretions, which he couldn’t find enough whores to absorb!” — John Adams on Alexander Hamilton

Indeed, Trump’s politically incorrect dialogue has created an entertaining drama where viewers are left to wonder: how far will he go? No one denies that the strategy works; they just lament that their dorkier candidates won’t leverage it to their advantage. No one can out-Trump Trump, but his success does show that excessive groveling is unfitting for a POTUS candidate.


You know your optics are terrible when riding a motorcycle somehow makes you look dorkier than you normally do. The only other candidate who didn’t look like a floundering dork was Kasich, though he couldn’t beat the Donald in rhetoric. I’m not saying you should get excited about a malignant narcissist running for office, I’m just saying he’s better at it than most others who have tried.

All Trump has to do to lose his lead is start backing down from fights and start groveling when challenged. The debates aren’t about the issues; they are a trial by fire to see if a candidate will crack under pressure. The issues are just a tool to bludgeon him with. How a candidate responds to personal attacks further reveals his character, which is why enforcing an air of artificial civility becomes counter-productive if taken to an extreme.

Media Corruption

Rolling Stone & UVA Rape Hoax: Why The Media Keeps Bullshitting You – It’s For Your Own Good

The internet is in an uproar over a rape hoax story. The gist of it is that a Rolling Stone author published an article about a UVA fraternity that had an initiation ritual involving gang rape and broken beer bottles. The story was run with a minimum of fact checking, despite the fact that the author went through a great deal of searching to find a victim who would tell the right kind of rape story:


Magazine writer Sabrina Rubin Erdely knew she wanted to write about sexual assaults at an elite university. What she didn’t know was which university.

So, for six weeks starting in June, Erdely interviewed students from across the country. She talked to people at Harvard, Yale, Princeton and her alma mater, the University of Pennsylvania. None of those schools felt quite right. But one did: the University of Virginia, a public school, Southern and genteel, brimming with what Erdely calls “super-smart kids” and steeped in the legacy of its founder, Thomas Jefferson.


There was no doubt that she could find rapes that happen on campus, they just weren’t the right kind of rapes. She needed something that hit all of the cognitive biases of her audience in order to tell the right kind of story. Journalists like Sabrina Erdely are working around the clock to destroy what little credibility their left-wing publishers had left by valuing sensemaking over investigation.


Journalists now exist to serve as advocates for causes, not as eyewitnesses who report events to the public. And like any good advocate, they will not willingly surrender any ground that would threaten the advancement of their cause. The difference between this type of journalism and actual advocacy journalism is that they are not transparent in their advocacy; they hide behind objectivity to cloak their propaganda. These advocates rarely concern themselves with taking on actual rape cultures that create things like the Rotherham abuse scandal. That would be dangerous and actually disrupt the status quo. Instead they are concerned with furthering the status quo from positions of power in the established media while wearing the guise of rebels.


To quote the assistant editor of the Rolling Stone hoax story:

Ultimately, though, from where I sit in Charlottesville, to let fact checking define the narrative would be a huge mistake.


In other words, hastily put-together propaganda without a sliver of fact checking. More time was spent looking for someone to tell the right kind of story than was spent checking the story. Media outlets initially avoided publishing any op-eds that would contradict the Rolling Stone story; it wasn’t until a blog post went viral that retractions were made. Rolling Stone has since put up an apology, then retracted it and posted an edited apology. It’s worth noting that Sabrina Erdely has been writing for Rolling Stone for years and this is not the first time discrepancies have been noted in her articles.


Fun fact of the day, trust in media is at an all time low in America:

WASHINGTON, D.C. — After registering slightly higher trust last year, Americans’ confidence in the media’s ability to report “the news fully, accurately, and fairly” has returned to its previous all-time low of 40%. Americans’ trust in mass media has generally been edging downward from higher levels in the late 1990s and the early 2000s.

Eric Garner Media Corruption

What The Media Isn’t Telling You About Eric Garner’s Death

The autopsy results have shown that there was no damage to his neck bones or windpipe from the hold and that he did not die of asphyxia. The media has rightfully pointed out that chokeholds that compress the windpipe and asphyxiate suspects are banned by the NYPD (but not by law). It appears the officer had Garner in a headlock and was using it as leverage to take him down. Once Garner is on the ground, the officer executes a vascular choke, which restricts the arteries in the neck that supply the brain with blood. This is different from an air choke, which targets the windpipe. You’ll note in the video that Garner repeatedly says that he cannot breathe, which should clue in the observant viewer that his windpipe is not being obstructed. Garner’s preexisting medical conditions, combined with the chest and neck (but not throat) compression and lying in the prone position, caused his death.

[Garner had] diabetes, sleep apnea, and asthma so severe that he had to quit his job as a horticulturist for the city’s parks department. He wheezed when he talked and could not walk a block without resting, they said.

The Gracie brothers put together a video a while back where they explain it:


The details will be paved over in favor of pushing a larger agenda.


“You never let a serious crisis go to waste. And what I mean by that it’s an opportunity to do things you think you could not do before.” – Rahm Emanuel

Mathematics Problem Solving

Mental Calculators

Interesting piece on how mental calculators are competing against each other to quickly solve math problems:

The high point of the abacus calendar is the All Japan Soroban Championship, which took place earlier this year in Kyoto.

And the high point of the championship is the category called “Flash Anzan” – which does not require an abacus at all.

Or rather, it requires contestants to use the mental image of an abacus. Since when you get very good at the abacus it is possible to calculate simply by imagining one.

In Flash Anzan, 15 numbers are flashed consecutively on a giant screen. Each number is between 100 and 999. The challenge is to add them up.

Simple, right? Except the numbers are flashed so fast you can barely read them.

I was at this year’s championship to see Takeo Sasano, a school clerk in his 30s, break his own world record: he got the correct answer when the numbers were flashed in 1.70 seconds. In the clip below, taken shortly before, the 15 numbers flash in 1.85 seconds. The speed is so fast I doubt you can even read one of the numbers.
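The drill itself is simple to reproduce. Here is a minimal sketch in Python (the pacing and console display are rough assumptions on my part; real contests use a dedicated program on a giant screen):

```python
import random
import time

def flash_anzan(count=15, lo=100, hi=999, interval_s=0.12):
    """Flash `count` random numbers one at a time, then return their sum.

    An interval of ~0.12 s per number roughly approximates the record
    pace of 15 numbers in under 2 seconds.
    """
    numbers = [random.randint(lo, hi) for _ in range(count)]
    for n in numbers:
        print(n)
        time.sleep(interval_s)
    return sum(numbers)

# With no delay, this just generates a drill and its answer.
answer = flash_anzan(interval_s=0)
```

Even at zero delay a terminal can't match the contest's display timing, but the generator shows how narrow the task is: fifteen three-digit additions, nothing more.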

I’ve often wondered how 3D visual displays, like Google Glass, are going to change the way we work with and augment data. It may be possible to speed up our own performance dramatically alongside the computers we work with.


Click to access Pesenti.pdf

Calculating prodigies are individuals who are exceptional at quickly and accurately solving complex mental calculations. With positron emission tomography (PET), we investigated the neural bases of  the cognitive abilities of an expert calculator and a group of non-experts, contrasting complex mental calculation to memory retrieval of arithmetic facts. We demonstrated that calculation expertise was not due to increased activity of processes that exist in non-experts; rather, the expert and the non-experts used different brain areas for calculation. We found that the expert could switch between short-term effort-requiring storage strategies and highly efficient episodic memory encoding and retrieval, a process that was sustained by right prefrontal and medial temporal areas.

Inspired by Ribot’s psychological work (1881), they believed in the existence of not one type of memory but several partial, special, and local memories, each devoted to a particular domain. In all arithmetical prodigies, memory for digits is abnormally developed compared with other memories. Inaudi was considered to be an auditory memory-based mental calculator; when memorizing digits, he did not rely on the appearance of the items or create visual imagery of any kind. Rather, he remembered digits principally by their sounds. Inaudi’s methods of calculation and memorization were original and different from those used by Diamandi, who was a typical visual memory-based mental calculator. The experiments presented in the 1893 article were among the first scientific demonstrations of the importance to psychology of studying different types of memory. The present work gives a translation of this pioneering experimental article on expert calculators by Charcot and Binet, instructive for the comprehension of normal memory.

Previous research has demonstrated that people with mild intellectual disabilities (ID) have difficulty in ‘weighing up’ information, defined as integrating disparate items of information in order to reach a decision. However, this problem could be overcome by the use of a visual aid to decision making. In an earlier study, participants were taught to translate information about the pros and cons of different choices into a single evaluative dimension, by manipulating green (good) and red (bad) bars of varying lengths (corresponding to the value ascribed). Use of the visual calculator increased the consistency of performance (and decreased impulsive responding) in a temporal discounting task, and increased the amount of information that participants provided to justify their decisions in scenario-based financial decision-making tasks.

The results suggest that the visual calculator has practical applicability to support decision making by people with mild ID in community settings.

Among the many examples of the congenital form are the calendar calculators, who can quickly provide the day of the week for any date in the past; the musical savants, who have perfect pitch; and the hyperlexics, who (in one case) can read a page in 8s and recall the text later at a 99% level. Other types of talents and artistic skills involving three-dimensional drawing, map memory, poetry, painting, and sculpturing are also observed. One savant could recite without error the value of Pi to 22,514 places. Persons with the acquired form develop outstanding skills after brain injury or disease, usually involving the left frontotemporal area. This type of injury seems to inhibit the “tyranny of the left hemisphere,” allowing the right hemisphere to develop the savant skills. Another way to inhibit the left frontotemporal area is to use transcranial magnetic stimulation in normal subjects; nearly one-half of these subjects can then perform new skills during the stimulation that they could not perform before. This type of finding indicates the potential in all of us for the development of savant skills in special circumstances.

In the present study, we examined cortical activation as a function of two different calculation strategies for mentally solving multidigit multiplication problems. The school strategy, equivalent to long multiplication, involves working from right to left. The expert strategy, used by “lightning” mental calculators (Staszewski, 1988), proceeds from left to right. The two strategies require essentially the same calculations, but have different working memory demands (the school strategy incurs greater demands). The school strategy produced significantly greater early activity in areas involved in attentional aspects of number processing (posterior superior parietal lobule, PSPL) and mental representation (posterior parietal cortex, PPC), but not in a numerical magnitude area (horizontal intraparietal sulcus, HIPS) or a semantic memory retrieval area (lateral inferior prefrontal cortex, LIPFC). An ACT-R model of the task successfully predicted BOLD responses in PPC and LIPFC, as well as in PSPL and HIPS.

No gross anatomical differences were observed. By morphological assessment, cerebral volume (1362 mL) was larger than normative literature values for adult males. The corpus callosum was intact and did not exhibit abnormal structural features. The right cerebral hemisphere was 1.9% larger than the left hemisphere; the right amygdala and right caudate nuclei were 24% and 9.9% larger, respectively, compared with the left side. In contrast, the putamen was 8.3% larger on the left side. Fractional anisotropy was increased on the right side as compared with the left for 4 of the 5 bilateral regions studied (the amygdala, caudate, frontal lobe, and hippocampus). Fiber tract bundle volumes were larger on the right side for the amygdala, hippocampus, frontal lobe, and occipital lobe. Both the left and the right hippocampi had substantially increased axial and mean diffusivities as compared with those of a comparison sample of nonsavant adult males. The corpus callosum and left amygdala also exhibited high axial, radial, and mean diffusivities. MR spectroscopy revealed markedly decreased γ-aminobutyric acid and glutamate in the parietal lobe.

See also:

hacker culture

Using Bitcoin To Avoid US Poker Laws

Here is an interesting step that might broaden the market for bitcoins. Right now Bitcoin fails a simple convenience test – it can take an hour or more to convert money into bitcoins. Now an online poker site has added Bitcoin as a form of payment, giving it a much wider reach:

“Michael Hajduk had sunk one year and about $20,000 into developing his online poker site, Infiniti Poker, when the U.S. online gambling market imploded. On April 15, 2011, a day now known in the industry as Black Friday, the U.S. Department of Justice shut down the three biggest poker sites accessible to players in the U.S., indicting 11 people on charges of bank fraud, money laundering, and illegal gambling. … Infiniti Poker … plans to accept Bitcoin when it launches later this month. The online currency may allow American gamblers to avoid running afoul of complex U.S. laws that prevent businesses from knowingly accepting money transfers for Internet gambling purposes. ‘Because we’re using Bitcoin, we’re not using U.S. banks — it’s all peer-to-peer,’ Hajduk says. ‘I don’t believe we’ll be doing anything wrong.'”

Hajduk says the ability to store Bitcoins on players’ computers is appealing. “At the end of the day, [the government] cannot freeze your account because they cannot kick down the door to Bitcoin,” he says.

There are other risks as well. In recent months hackers have pulled off several Bitcoin heists, and this summer Bitcoin Savings & Trust, billed as a “Bitcoin hedge fund,” made off with more than $5 million entrusted to the site by investors, in what appears to be a Ponzi scheme. Also, Bitcoin wallets can vanish as a result of hard-drive crashes or other computer problems. That’s how at least one user lost 50,000 Bitcoins, according to Peter Vessenes, chairman of Bitcoin Foundation, an organization that helps develop and promote the virtual currency.

The economy of the infamous Silk Road is much smaller than the $1.5 trillion+ in profits that come from the international drug trade:
Silk Road sellers have collectively had around $1.9 million of sales per month in recent months. Almost 1,400 sellers have participated in the marketplace, and they have collectively earned positive ratings from 97.8 percent of buyers. And the service is growing, with Silk Road’s estimated commission revenue roughly doubling between March and July of this year.

The current market price for all existing bitcoins is estimated at over $100 million.

economics Societies

Replying To Justin Boland On Crisis, Environmentalism & Change

Original post:

DER SPIEGEL: Professor Meadows, 40 years ago you published “The Limits to Growth” together with your wife and colleagues, a book that made you the intellectual father of the environmental movement. The core message of the book remains valid today: Humanity is ruthlessly exploiting global resources and is on the way to destroying itself. Do you believe that the ultimate collapse of our economic system can still be avoided?

Dennis Meadows: The problem that faces our societies is that we have developed industries and policies that were appropriate at a certain moment, but now start to reduce human welfare, like for example the oil and car industry. Their political and financial power is so great and they can prevent change. It is my expectation that they will succeed. This means that we are going to evolve through crisis, not through proactive change.

My response:

What is true in the US isn’t true in Germany, or in China, India or any number of other rising countries.

Take Germany, which has had a serious power crisis over the last few years. Germany decided to begin shutting down its nuclear plants almost overnight after Fukushima.

This destroyed overnight several German industries that had been established for decades, because shortages drove the cost of power to among the highest in Europe. Others who could squeeze out small gains had to modify their technology to be more energy efficient.

Environmentalists fill a strange niche in this ecosystem – when you protest or delay the building of new reactors, you only prolong the use of the old ones, or rely more on coal plants.

So you end up with old reactor designs that don’t have passive safety systems to protect against meltdown during power outages or breaches, while also requiring more maintenance and more fuel. All of which means more jobs, but less overall efficiency. Another secondary effect of no new plants being built – most old designs (with the exception of France’s) only run their fuel cycle once, which creates a lot more waste to be recycled. Newer closed-fuel-cycle designs burn and recycle the fuel, in some cases without it leaving the reactor, which dramatically reduces waste.

Fukushima was built in the 1960s; designs that old were supposed to be decommissioned after 30–40 years. In the case of Fukushima many of the issues weren’t just with the age of the design, but also with falsification of records, lying and inaction.

On the upside, Germany is leading the world in solar adoption, though it will be several years until solar can provide for German energy needs no matter how much austerity they enforce. Was this crisis caused by “the solar cartel”, or by panic and willful ignorance?

The thing is, no one can “stop” the technological progress across the world. But it can certainly be slowed down.

The reactors move to India, China and even the US. The same thing happens with GMOs, stem cells, aerogel or any other technology you want to bring up. Sometimes distribution is too difficult or costly, or it can be outright blocked by a cartel. Sometimes there are legal countermoves, as with stem cells; other times you just move shop to a more enterprising district, or, as in the case of WikiSpeed, the incumbents’ business model is so outdated that you can beat competitors just by not being ancient.

But the macro-pattern is still towards individual empowerment, decentralization and internationalization.

There will be periods without a safety net – that’s what I see. Many people don’t have a clear direction now that we have more freedom and less guidance from traditional institutions. As more power gets pushed to the individual and to small groups, what happens next will depend on whether they want proactive change.

Africa International Affairs Warfare

M23 & Congo – 4GW In Action


For a period of about two weeks, the rebel M23 occupied the city of Goma in the Democratic Republic of the Congo. The government forces didn’t fight back, instead choosing to flee the city for secure government compounds. The U.N. forces concentrated themselves there as well, leaving the city to be plundered by the rebels. This isn’t a surprise, as most of the army officers had already been bought off by the rebels. Any political supporters of the government in the city could then be killed on a whim by the rebel forces. After the two weeks, M23 pulled out before international resistance could form. Both the US and UK pulled their aid from neighboring Rwanda, which was likely supporting the rebels in order to destabilize the Congolese government and gain easier access to the mineral-rich area.


In other words, the rebels pulled out on their own terms after they had taken everything worth taking. As is standard operating procedure, many rebels simply took off their uniforms and stayed behind to disrupt government infrastructure and programs, further reducing the support and legitimacy of the government. The lesson is fairly simple: pay very close attention to the loyalties and self-interest of everyone in a conflict region if you want to understand how it will develop. Look at the extended social networks, friends of friends.
“The soldiers we see here are the ones that took over this city? Is that it?” the 78-year-old said. “I think they are still here in hiding.”

Many residents are convinced that M23 soldiers swapped their military fatigues for civilian clothing and will remain in the city as “infiltrators”.

“Look, I am a Congolese. I am from this place. I can tell the difference between a civilian and a soldier. And, for sure, they are here,” 33-year-old mechanic Thierry Bisimwa told Al Jazeera. “Taking off their uniform and putting on civilian dress is a strategy.”

“We have a situation where army officials, in the middle of a war, were selling weapons to the M23 … what is going on?”

He was referring to General Gabriel Amisi, the DRC’s chief of land forces, who was suspended on November 23 when the UN alleged he had been “smuggling arms” to multiple rebel groups in the region.

According to people here, he was not the only military official playing both sides.

“We have an army with high-level officers selling arms and information to the other side. This is why they are so incompetent,” Bisimwa said.

There is a smouldering disdain for the United Nations here, as well as rage directed towards neighbouring Rwanda and Uganda for their roles in the crisis.

Intelligence International Affairs

What Gave Away Bin Laden’s Location


As you would expect, Osama Bin Laden kept messages to friends and family reasonably secure. However, transmissions between his bodyguards and their families were not subjected to the same level of scrutiny. His decision to stay in such a high-profile house was unusual, though. Anyone with the slightest bit of curiosity would wonder about the purpose of a compound with 12-foot concrete walls and barbed wire. It is to be expected that he would have the cooperation of local military and intelligence elites; rebels have a very difficult time operating unless they stack the deck in their favor by allying with neighboring forces. Their lack of technological sophistication is also pretty standard: many documents have been captured unencrypted from insurgents because they don’t understand that encryption, done properly, is very difficult or impossible to break.
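On that last point, the textbook illustration is the one-time pad: a truly random key, as long as the message and never reused, is information-theoretically unbreakable, yet the slightest sloppiness – reusing the key – leaks the relationship between messages. A toy sketch in Python (illustrative only; this is not a claim about the ciphers any actual group used):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of `data` with the corresponding byte of `key`."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the compound at dawn"
key = secrets.token_bytes(len(message))  # truly random, same length, used once

ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # decryption recovers plaintext

# Key reuse is the classic mistake: XORing two ciphertexts made with the
# same key cancels the key out, exposing the XOR of the two plaintexts.
message2 = b"move at the compound at dusk"
leak = xor_bytes(xor_bytes(message2, key), ciphertext)
assert leak == xor_bytes(message, message2)  # the key has vanished entirely
```

Done properly, the ciphertext alone tells an attacker nothing; done sloppily, the math itself betrays you – which is the gap between disciplined and undisciplined operational security.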

Satellite phone calls that Osama bin Laden’s bodyguard made from July to August last year are believed to have helped US forces hunt down the Al Qaeda leader in the Pakistani compound where he was killed early Monday, according to local Pakistani intelligence sources.

US intelligence agencies tracked the Kuwaiti bodyguard’s calls from the compound to Al Qaeda associates in the cities of Kohat and Charsada in Khyber Pakhtunkhwa Province, a narrative that was corroborated by several sources.

From Wiki:

American intelligence officials discovered the whereabouts of Osama bin Laden by tracking one of his couriers. Information was collected from Guantánamo Bay detainees, who gave intelligence officers the courier’s pseudonym and said that he was a protégé of Khalid Sheikh Mohammed.[5] In 2007, U.S. officials discovered the courier’s real name and, in 2009, that he lived in Abbottābad, Pakistan.[6] Using satellite photos and intelligence reports, the CIA surmised the inhabitants of the mansion. In September, the CIA concluded that the compound was “custom built to hide someone of significance” and that bin Laden’s residence there was very likely.[7][8] Officials surmised that he was living there with his youngest wife.[8]

Built in 2005, the three-story[12] mansion was located in a compound about 4 km (2.5 mi.) northeast of the center of Abbottabad.[7] While the compound was assessed by US officials at a value of USD 1 million, local real-estate agents assess the property value at USD 250 thousand.[13] On a lot about eight times the size of nearby houses, it was surrounded by 12- to 18-foot (3.7-5.5 m)[8] concrete walls topped with barbed wire.[7] There were two security gates and the third-floor balcony had a seven-foot-high (2.1 m) privacy wall.[12] There was no Internet or telephone service coming into the compound. Its residents burned their trash, unlike their neighbors, who simply set it out for collection. The compound is located at (34°10′09″N 73°14′33″E), 1.3 km (0.8 mi.) southwest of the closest point of the sprawling Pakistan Military Academy.[14] President Obama met with his national security advisors on March 14, 2011, in the first of five security meetings over six weeks. On April 29, at 8:20 a.m., Obama convened with Thomas Donilon, John O. Brennan, and other security advisers in the Diplomatic Room, where he authorized a raid of the Abbottābad compound. The government of Pakistan was not informed of this decision.[7]


US intelligence services contacted a Pakistani physician through the US NGO Save The Children to help them set up a fake vaccination program that would allow them to collect DNA to identify the people inside the compound. This led to him being arrested and sentenced to 33 years for treason, supposedly for links to a local tribal terrorist organization:

To identify the occupants of the compound, the CIA worked with doctor Shakil Afridi to organize a fake vaccination program. Nurses gained entry to the residence to vaccinate the children and extract DNA,[9] which could be compared to a sample from his sister, who died in Boston in 2010.[10] It’s not clear if the DNA was ever obtained.[11]

Colleagues at Jamrud Hospital in Pakistan’s northwestern Khyber tribal area were suspicious of the absences of Dr. Shakeel Afridi, the hospital’s chief surgeon, which he explained as “business” to attend to in Abbottabad. Dr Afridi was accused of having taken a half-dozen World Health Organization cooler boxes without authorization. The containers are for inoculation campaigns, but no immunization drives were underway in Abbottabad or the Khyber agency.[11][12]

Pakistani investigators said in a July 2012 report that Afridi met 25 times with “foreign secret agents, received instructions and provided sensitive information to them.”[13] According to Pakistani reports, Afridi told investigators that the charity Save the Children helped facilitate his meeting with U.S. intelligence agents although the charity denies the charge. The report alleges that Save the Children’s Pakistan director introduced Afridi to a western woman in Islamabad and that Afridi and the woman met regularly afterwards.

Future Trends

You Are In The Future


There are some pundits proclaiming the eternal life of “pink collar” jobs in the service and hospitality industry. They are in vogue because that’s where the majority of the job growth has come from; they are easy to create and give people a way to scratch out a living doing rote work. This has led to the false assumption that the majority of these jobs are irreplaceable by computers and robotics.

One of the prime candidates is nursing, which requires more education and training than most of the other service jobs. Amusingly enough, in an age when people are talking about replacing doctors who have decades of experience in general practice with narrow AI on cellphones and teleconferencing, people believe nurses and other “pink collar” jobs are immune. With the exception of NICU, CCU and other specialist nurses, most of the work that they do can be replaced or augmented by current technology as it is, with a much smaller margin of error: delivering medication at specific times, giving a patient ice water, moving patients to avoid bed sores, ensuring that a patient is not given food that is against his dietary requirements (e.g. diabetics), and checking to see if a patient is faking a seizure to get attention.
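A toy sketch in Python of the kind of rule-based check involved in one of those tasks, flagging tray items against a patient’s dietary restrictions (the food and restriction lists are hypothetical examples):

```python
# Toy rule-based dietary check; food and restriction lists are hypothetical.
RESTRICTED_FOODS = {
    "diabetic": {"soda", "candy", "white bread"},
    "low-sodium": {"canned soup", "bacon"},
}

def tray_violations(patient_restrictions, tray_items):
    """Return any tray items that conflict with the patient's diet."""
    violations = set()
    for restriction in patient_restrictions:
        violations |= RESTRICTED_FOODS.get(restriction, set()) & set(tray_items)
    return violations
```

A real system would pull the restriction lists from the patient’s chart, but the logic is this simple.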

Eliminating this frees up time, and would likely lead to staffing cuts. Even the software that nurses use for giving reports can be vastly improved, though you would have to use programmers who are actually competent. Current software requires hours of training because of the unintuitive design.

The other part of the argument is that human interaction cannot be replicated by machines, and therefore people will always want other humans to help them. This misses the point entirely: people don’t care about the nurse, they care about how she serves them. When something comes along that can serve them in basic ways dramatically better, the nurse will be put out of work. If people were fooled by Eliza, they won’t mind expressing themselves to modern chatbots, mainly because they just want to express their feelings.
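Eliza itself was little more than pattern matching and pronoun reflection; a minimal sketch of that mechanism in Python (the patterns are illustrative, not Weizenbaum’s originals):

```python
import re

# Pronoun reflections; illustrative, not Weizenbaum's originals.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement):
    """Turn 'I feel X' into 'Why do you feel X?'; otherwise deflect."""
    match = re.match(r"i feel (.*)", statement.lower())
    if match:
        return "Why do you feel " + reflect(match.group(1)) + "?"
    return "Tell me more."

print(respond("I feel my work is pointless"))  # Why do you feel your work is pointless?
```

A trick this shallow fooled people in 1966; modern chatbots have far more to work with.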


Pundits are still talking as if farming won’t be automated for 40 years, when we are already deploying self-driving tractors and UAVs for crop dusting. And more than likely, the crop that the farmer is harvesting is a GMO.

Nevada, California and Florida have already legalized self-driving cars. We’ve developed simple plug-ins that stop you from browsing blocked sites after a set amount of time. The first of many pre-commitment devices that monitor, force and shame you into whatever you or society wants you to be:

If you agree—and only if you agree—Progressive Insurance will give you a device to install in your car that will rat you out for jack-rabbit starts and slamming on the brakes. It’s a small thing that plugs into your on-board diagnostic system, and it transmits as you drive. If your little minder shows that you don’t act like Dale Earnhardt Jr. behind the wheel, you’ll save up to 30 percent on your auto insurance. Although there’s no official penalty for letting the company find out that you regularly lay down rubber, in fact you’ll pay more for coverage than will tamer drivers. You’ll also be acting to tame your own behavior by raising the price of recklessness.

Progressive’s driving spy is a sneaky example of the “precommitment device,” a technique that people use to bind themselves to their preferred desires, and a subject I have been studying for my new book about the problem of self-control, We Have Met the Enemy.
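The browsing-time plug-ins mentioned earlier reduce to the same pre-commitment logic; a minimal sketch in Python (the site list and time budget are hypothetical):

```python
# Pre-commitment sketch: a daily time budget for distracting sites.
# Site list and budget are hypothetical.
class SiteBlocker:
    def __init__(self, blocked_sites, daily_budget_secs):
        self.blocked = set(blocked_sites)
        self.budget = daily_budget_secs
        self.used = 0.0

    def record_visit(self, site, seconds):
        """Count time spent on monitored sites against the budget."""
        if site in self.blocked:
            self.used += seconds

    def allowed(self, site):
        """Monitored sites lock out once the daily budget is spent."""
        return site not in self.blocked or self.used < self.budget
```

The whole category of devices, from Progressive’s dongle to browser extensions, is just this pattern plus a sensor.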

It’s too taboo to mention how quickly things are changing; optimism is alright just as long as you aren’t too specific. Complexity is now used as an excuse for not dealing with simple but emotionally difficult problems.

You’re in the future, start acting like it.


Open Source Intelligence Analysis – Demographics

Getting good demographics can help you to quickly understand the context of messages that circulate through different websites. The easiest method for large websites is to look them up on

For instance, we see an obvious pattern: Hispanics and blacks tend to visit conspiracy websites much more often than whites. We also see that many of the sites tend to have older visitors with higher incomes. The exception is data-driven websites like Wikileaks, which tend to draw lower-income but highly educated viewers who are mostly white or Asian.

If you can find Facebook groups for websites like this, you can cross-check some of the basic information by looking at user photos, names, and ages (keep in mind that Facebook users tend to be younger than average):

To get a quick introduction to the character of a website, simply do an image search of it on Google, e.g.: site:

Search through the websites looking for mentions of states and/or cities using Google, e.g.: texas (don’t add a space between the search command and the website). Look for introduction threads or user profiles that list locations. Twitter accounts can also assist in this process.

With this information you can cross-correlate the cities members live in to get an idea of their general make-up, and how it compares to other demographic sources.
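The state/city search technique above amounts to generating site-restricted queries and tallying the hits; a sketch in Python (the domain and place names are hypothetical examples):

```python
# Generate site-restricted queries for the location-search technique.
# Domain and place names are hypothetical.
def location_queries(site, places):
    # Note: no space between "site:" and the domain.
    return [f"{place} site:{site}" for place in places]

def rank_locations(counts):
    """counts: {place: hit count}; returns places, most-mentioned first."""
    return sorted(counts, key=counts.get, reverse=True)

print(location_queries("example.com", ["texas", "ohio"]))
```

Feed the ranked list back into your demographic cross-checks, whether the counts come from manual searches or a scraper.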

If there are a lot of unique images on the website, use Google’s image search function to look around for other websites with the same images, which will expand your understanding of the psychographics of the users by finding similar sites and images.

For more google search ideas, look at “How to solve impossible problems: Daniel Russell’s awesome Google search technique”:

If you want to map out keywords and connections, use a graph similar to this:

A basic search gives us something like this:

Which shows us that we can also harvest data from youtube and amazon, as well as the smaller linked websites.
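The keyword-and-connection graph can be kept as a plain adjacency map; a sketch in Python (the nodes are hypothetical examples of the kind of linked properties a basic search surfaces):

```python
# Keyword/connection graph as an adjacency map; nodes are hypothetical
# examples of linked properties surfaced by a basic search.
graph = {
    "conspiracy-site.example": {"youtube", "amazon", "smaller-site-a"},
    "youtube": {"conspiracy-site.example"},
    "smaller-site-a": {"smaller-site-b"},
}

def reachable(graph, start, depth):
    """All nodes reachable from `start` in at most `depth` hops."""
    frontier, seen = {start}, {start}
    for _ in range(depth):
        frontier = {n for node in frontier for n in graph.get(node, ())} - seen
        seen |= frontier
    return seen - {start}
```

Walking the map a hop or two out is how you find the smaller linked websites worth harvesting next.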

Now that we have the basic demographics, we look for commonalities. Search through abstracts of psychology journals using Google Scholar, looking for keywords related to conspiracy theories and demographic information.

We end up with some curious things like this:

This article examines the endorsement of conspiracy beliefs about birth control (e.g., the belief that birth control is a form of Black genocide) and their association with contraceptive attitudes and behavior among African Americans. The authors conducted a telephone survey with a random sample of 500 African Americans (aged 15-44). Many respondents endorsed birth control conspiracy beliefs, including conspiracy beliefs about Black genocide and the safety of contraceptive methods. Stronger conspiracy beliefs predicted more negative attitudes toward contraceptives. In addition, men with stronger contraceptive safety conspiracy beliefs were less likely to be currently using any birth control. Among current birth control users, women with stronger contraceptive safety conspiracy beliefs were less likely to be using contraceptive methods that must be obtained from a health care provider. Results suggest that conspiracy beliefs are a barrier to pregnancy prevention. Findings point to the need for addressing conspiracy beliefs in public health practice.


This study used canonical correlation to examine the relationship of 11 individual difference variables to two measures of beliefs in conspiracies. Undergraduates were administered a questionnaire that included these two measures (beliefs in specific conspiracies and attitudes toward the existence of conspiracies) and scales assessing the 11 variables. High levels of anomie, authoritarianism, and powerlessness, along with a low level of self-esteem, were related to beliefs in specific conspiracies, whereas high levels of external locus of control and hostility, along with a low level of trust, were related to attitudes toward the existence of conspiracies in general. These findings support the idea that beliefs in conspiracies are related to feelings of alienation, powerlessness, hostility, and being disadvantaged. There was no support for the idea that people believe in conspiracies because they provide simplified explanations of complex events.


From this information we can break them into traditional psychographics using stock models:

Now you can create a database that can be used for advanced analytic operations, using R, Excel, SAS or a programming language like Python. R tends to be more effective for data sets smaller than 2 GB because of its memory usage, but it has nearly all the statistical functions anyone has thought to use, which makes it very useful for experimental projects. SAS is commercial software that is mainly effective for large data sets. Excel is a decent entry-level solution. Python is not quite as flexible as R yet, but its modules are improving and it can be interfaced with R.
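A minimal sketch of such a database using only Python’s standard library (the columns and rows are hypothetical examples, not real demographic data):

```python
import sqlite3
import statistics

# In-memory demographics table; columns and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visitors (site TEXT, median_age INTEGER, income INTEGER)")
conn.executemany("INSERT INTO visitors VALUES (?, ?, ?)", [
    ("conspiracy-site-a", 52, 70000),
    ("conspiracy-site-b", 48, 65000),
    ("wikileaks-like", 31, 40000),
])

# The kind of query such a database makes trivial.
ages = [row[0] for row in conn.execute("SELECT median_age FROM visitors")]
print(statistics.mean(ages))
```

Once the data is in SQL you can hand the same tables to R or SAS when you outgrow this.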

Problem Solving

Dealing With Complexity – Solving Wicked Problems

Allen Downey, in his book Think Complexity, identifies a shift in the axes of scientific models:

Equation-based –> Simulation-based

Analysis –> Computation

These new models allow us not only to predict behavior, but also to introduce randomness and give agents more detail than we see in classical approaches like Game Theory.

DARPA and various other government agencies and corporations led the way in the early years for simulations. And slowly it filtered down through the intellectual strata until some K-12 programs started using NetLogo to teach kids about cell structures, the behavior of gas molecules and emergent complexity. The options at our fingertips still aren’t anywhere near as good as they will be in 5 or so years, but it’s what we have to work with.
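A minimal agent-based simulation in the NetLogo spirit, in Python: gas molecules as random walkers, with diffusion emerging from purely local rules (the parameters are illustrative):

```python
import random

# Gas molecules as 1-D random walkers; diffusion emerges from local rules.
def diffuse(n_agents=100, steps=500, seed=42):
    random.seed(seed)
    positions = [0] * n_agents          # all molecules start together
    for _ in range(steps):
        positions = [p + random.choice((-1, 1)) for p in positions]
    return positions

spread = diffuse()
print(max(spread) - min(spread))  # the cloud has spread out
```

No molecule knows anything about the cloud, yet the cloud spreads; that is the emergent-complexity lesson the K-12 NetLogo programs teach.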

Allen goes through several pages of changes in scientific modeling caused by the equation-to-simulation and analysis-to-computation shifts; you can read about them on pages 16-22 if you’re curious (the book is free in PDF form).

This brings me around to the other point of my post: humans are horrible at working with complexity for a lot of reasons. One of the biggest I’ve seen so far is working memory. There is too much information, and people can’t sort through it quickly enough. And even when they can, they can’t hold enough of it inside of their heads to make the connections they need to understand their situation and plan out possible contingencies.

The average person can hold 5-9 objects in their working memory at a time, which seriously hinders their ability to figure out large complex scenarios with hundreds of thousands of probabilities. Simulations can work around this by giving you real time feedback on changes in variables while incorporating randomness, but for regular analysis finding ways to “visualize” information seems to work well, more on this in a minute.

Nassim Nicholas Taleb’s black swan theory is one of the many articulations of the “unknown unknowns” category we have to deal with. They are the side effect of an environment too complex for most people to understand. He used two examples, the 9/11 attack and the mortgage meltdown:
An example Taleb uses to explain his theory is the events of 11 September 2001. 9/11 was a shock to all common observers. Its ramifications continue to be felt in many ways: increased levels of security; “preventive” strikes or wars by Western governments. The coordinated, successful attack on the World Trade Center and The Pentagon using commercial airliners was virtually unthinkable at the time. However, with the benefit of hindsight, it has come to be seen as a predictable incident in the context of the changes in terrorist tactics.

Common observers didn’t think it was possible, but many experts had already considered such a scenario:

After the 1988 bombing of Pan Am Flight 103 over Lockerbie, Scotland, Rescorla worried about a terrorist attack on the World Trade Center. In 1990, he and a former military colleague wrote a report to the Port Authority of New York and New Jersey, which owns the site, insisting on the need for more security in the parking garage. Their recommendations, which would have been expensive, were ignored, according to James B. Stewart‘s biography of Rescorla, Heart of a Soldier.[7]

After Rescorla’s fears were borne out by the 1993 World Trade Center bombing, he gained greater credibility and authority, which resulted in a change to the culture of Morgan Stanley,[7] which he believed should have moved out of the building, as he continued to feel, as did his old American friend from Rhodesia, Dan Hill, that the World Trade Center was still a target for terrorists, and that the next attack could involve a plane crashing into one of the towers.[8] He recommended to his superiors at Morgan Stanley that the company leave Manhattan. Office space and labor costs were lower in New Jersey, and the firm’s employees and equipment would be safer in a proposed four-story building. However, this recommendation was not followed as the company’s lease at the World Trade Center did not terminate until 2006. At Rescorla’s insistence, all employees, including senior executives, then practiced emergency evacuations every three months.[9]

Feeling that the authorities lost legitimacy after they failed to respond to his 1990 warnings, he concluded that employees of Morgan Stanley, which was the largest tenant in the World Trade Center (occupying 22 floors), could not rely on first responders in an emergency, and needed to empower themselves through surprise fire drills, in which he trained employees to meet in the hallway between stairwells and go down the stairs, two by two, to the 44th floor.[7]

  • March 2001 – Italian intelligence warns of an al Qaeda plot in the United States involving a massive strike using aircraft, based on their wiretap of an al Qaeda cell in Milan.
  • July 2001 – Jordanian intelligence told US officials that al-Qaeda was planning an attack on American soil, and Egyptian intelligence warned the CIA that 20 al Qaeda Jihadists were in the United States, and that four of them were receiving flight training.
  • August 2001 – The Israeli Mossad gives the CIA a list of 19 terrorists living in the US and say that they appear to be planning to carry out an attack in the near future.
  • August 2001 – The United Kingdom is warned three times of an imminent al Qaeda attack in the United States, the third specifying multiple airplane hijackings. According to the Sunday Herald, the report is passed on to President Bush a short time later.
  • September 2001 – Egyptian intelligence warns American officials that al Qaeda is in the advanced stages of executing a significant operation against an American target, probably within the US.

Likewise, the mortgage meltdown was technically a black swan, but it was easily predictable if you saw the pattern of ownership, which clearly indicated fraud.

Taleb’s answer to this problem is not to try and predict possible future scenarios, but to simply make yourself more resilient. I don’t disagree with resilience, but I think an expanded approach can be taken here. The flaw that led to the black swans was the inability to make connections between pieces of information. If we don’t know which scenarios are most likely, we could just as easily end up putting too much effort into defense instead of looking for exponential returns on our resources.

How do we know when we’re looking at a very complex problem? Complex systems tend to be made up of diverse agents with interdependent relationships that change over time. So the question and the answers are changing. The behavior, emotions and motivations of the people in the problem are shifting. The connections between them also change. What does that mean?

For that we turn to Rittel and Webber:
Ten Criteria for Wicked Problems

Rittel and Webber characterise wicked problems by the following 10 criteria. (It has been pointed out that some of these criteria are closely related or have a high degree of overlap, and that they should therefore be condensed into four or five more general criteria. I think that this is a mistake, and that we should treat these criteria as 10 heuristic perspectives which will help us better understand the nature of such complex social planning issues.)

1. There is no definite formulation of a wicked problem.

“The information needed to understand the problem depends upon one’s idea for solving it. This is to say: in order to describe a wicked problem in sufficient detail, one has to develop an exhaustive inventory for all the conceivable solutions ahead of time.” [This seemingly incredible criterion is in fact treatable. See below.]
2. Wicked problems have no stopping rules.

In solving a tame problem, “… the problem-solver knows when he has done his job. There are criteria that tell when the solution or a solution has been found”. With wicked problems you never come to a “final”, “complete” or “fully correct” solution – since you have no objective criteria for such. The problem is continually evolving and mutating. You stop when you run out of resources, when a result is subjectively deemed “good enough” or when we feel “we’ve done what we can…”
3. Solutions to wicked problems are not true-or-false, but better or worse.

The criteria for judging the validity of a “solution” to a wicked problem are strongly stakeholder dependent. However, the judgments of different stakeholders …”are likely to differ widely to accord with their group or personal interests, their special value-sets, and their ideological predilections.” Different stakeholders see different “solutions” as simply better or worse.
4. There is no immediate and no ultimate test of a solution to a wicked problem.

“… any solution, after being implemented, will generate waves of consequences over an extended – virtually an unbounded – period of time. Moreover, the next day’s consequences of the solution may yield utterly undesirable repercussions which outweigh the intended advantages or the advantages accomplished hitherto.”
5. Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.

“… every implemented solution is consequential. It leaves “traces” that cannot be undone … And every attempt to reverse a decision or correct for the undesired consequences poses yet another set of wicked problems … .”
6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.

“There are no criteria which enable one to prove that all the solutions to a wicked problem have been identified and considered. It may happen that no solution is found, owing to logical inconsistencies in the ‘picture’ of the problem.”
7. Every wicked problem is essentially unique.

“There are no classes of wicked problems in the sense that the principles of solution can be developed to fit all members of that class.” …Also, …”Part of the art of dealing with wicked problems is the art of not knowing too early which type of solution to apply.” [Note: this is a very important point. See below.]
8. Every wicked problem can be considered to be a symptom of another [wicked] problem.

Also, many internal aspects of a wicked problem can be considered to be symptoms of other internal aspects of the same problem. A good deal of mutual and circular causality is involved, and the problem has many causal levels to consider. Complex judgements are required in order to determine an appropriate level of abstraction needed to define the problem.
9. The causes of a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.

“There is no rule or procedure to determine the ‘correct’ explanation or combination of [explanations for a wicked problem]. The reason is that in dealing with wicked problems there are several more ways of refuting a hypothesis than there are permissible in the [e.g. physical] sciences.”
10. [With wicked problems,] the planner has no right to be wrong.

In “hard” science, the researcher is allowed to make hypotheses that are later refuted. Indeed, it is just such hypothesis generation that is a primary motive force behind scientific development (Ritchey, 1991). Thus one is not penalised for making hypotheses that turn out to be wrong. “In the world of … wicked problems no such immunity is tolerated. Here the aim is not to find the truth, but to improve some characteristic of the world where people live. Planners are liable for the consequences of the actions they generate …”

How, then, does one tackle wicked problems? Some 20 years after Rittel & Webber wrote their article, Jonathan Rosenhead (1996), of the London School of Economics, presented the following criteria for dealing with complex social planning problems – criteria that were clearly influenced by the ideas presented by Rittel, Webber and Ackoff.
  • Accommodate multiple alternative perspectives rather than prescribe single solutions
  • Function through group interaction and iteration rather than back office calculations
  • Generate ownership of the problem formulation through stakeholder participation and transparency
  • Facilitate a graphical (visual) representation of the problem space for the systematic, group exploration of a solution space
  • Focus on relationships between discrete alternatives rather than continuous variables
  • Concentrate on possibility rather than probability

The morphology grid is somewhat popular for mapping out these types of problems: figure out the “finite states”, the root variables that cause change, and then map them out in a grid format. Mark Proffitt’s Predictive Innovation does a good job of this:
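A morphology grid is easy to enumerate programmatically; a sketch in Python (the variables and states are hypothetical examples):

```python
from itertools import product

# Root variables and their finite states; hypothetical examples.
grid = {
    "energy source": ["grid", "solar", "waste"],
    "water source": ["municipal", "well", "recycled"],
    "scale": ["household", "village"],
}

# Every cell of the morphology grid is one combination of states.
configurations = [dict(zip(grid, combo)) for combo in product(*grid.values())]
print(len(configurations))  # 3 * 3 * 2 = 18
```

Enumerating the full grid is what keeps you from fixating too early on one type of solution, which Rittel and Webber flagged as part of the art.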

Likewise, Lt. General Paul Van Riper mentions that the best leaders tend to be good at managing complex problems:

Future Trends Problem Solving Science technology

The Big Science and Technology Problems of the 21st Century

The big problems are mostly the same as in the 20th century and most of them stretch back much farther than that.

In fact, last year the X Prize Foundation declared a top-eight list of key challenges that could end up being public competitions in the coming months or years. The eight concepts or challenges included:

1. Water (“Super ‘Brita’ Water Prize”) – Develop a technology to solve the world’s number one cause of death: lack of safe drinking water.

2. Personal Health Monitoring System (“OnStar for the Body Prize”) – Develop and demonstrate a system which continuously monitors an individual’s personal health-related data leading to early detection of disease or illness.

3. Energy & Water from Waste – Create and demonstrate a technology that generates off-grid water and energy for a small village derived from human and organic waste.

4. Around the World Ocean Survey – Create an autonomous underwater vehicle that can circumnavigate the world’s oceans, gathering data each step of the way.

5. Transforming Parentless Youth – Dramatically and positively change the outcome for significantly at risk foster children, reducing the number of incarcerations and unemployment rate by fifty-percent or more.

6. Brain-Computer Interface (“Mind over Matter”) – Enable high function, minimally invasive brain to computer interfaces that can turn thought into action.

7. Wireless Power Transmission – Wireless transmission of electricity over distances greater than 200 miles while losing less than two percent of the electricity during the transmission.

8. Ultra-Fast Point-To-Point Travel – Design and fly the world’s fastest point-to-point passenger travel system

#1 is probably done, though it’s possible to create solutions at different scales of production.

#2 is going to be interesting, as hackers will add functions to their sensors, and malicious ones will disrupt other people’s sensors for fun and profit.

I’ve heard of many implementations of #3, so it’s going to come down to what is most economical.

#4 is probably done, though a more robust version that can go deeper will be required to really satisfy the spirit of the goal.

#5 is quite difficult, considering everything in our economy is pushing more people into unemployment in the traditional sense. This is a judo problem: you can’t fix it within the normal means.

On #6, I’ve seen some simple EEG-style sensors that can be integrated into games, but for the most part brain-machine interfaces are sci-fi. It’s easier to run prosthetics off of nerve impulses coming through limbs than to sense brainwaves without implants. So it’s going to take a while to crack that problem. 3D interfaces are hitting the market now, both in VR headsets and 3D-interactable Xbox Kinect sensors:

The skeleton-tracking system the Kinect sensors use is software-based and can be modified, but other companies have already launched “improved” sensors that can be used on their own for 3D interaction.

#7 is interesting and we’ll have to see what is the most economical way of tackling it.

#8 needs to factor in safety, otherwise it won’t be widely used.

Some of the NRC’s problems are less thrilling; the benefits aren’t as clear to the man on the street, and the list sort of reads like “stuff we were going to do anyway, but we made a report for it”:

From the National Research Council report, the five challenges are:

1. How can the U.S. optics and photonics community invent technologies for the next factor-of-100 cost-effective capacity increases in optical networks?

2. How can the U.S. optics and photonics community develop a seamless integration of photonics and electronics components as a mainstream platform for low-cost fabrication and packaging of systems on a chip for communications, sensing, medical, energy, and defense applications?

3. How can the U.S. military develop the required optical technologies to support platforms capable of wide-area surveillance, object identification and improved image resolution, high-bandwidth free-space communication, laser strike, and defense against missiles?

4. How can U.S. energy stakeholders achieve cost parity across the nation’s electric grid for solar power versus new fossil-fuel-powered electric plants by the year 2020?

5. How can the U.S. optics and photonics community develop optical sources and imaging tools to support an order of magnitude or more of increased resolution in manufacturing?

More interestingly, there is no way these questions can cover the whole of desires and needs that technology must fill for the 21st century. What are they missing?

Hydroponics technology

Automated Hydroponics Garden With Arduino

Arduino is useful because it adds the ability to interact with the real world with technology in a very flexible way. If anyone thinks technology hasn’t brought practical gains to everyday life, they need to check it out.
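The Arduino sketches for projects like this usually reduce to simple threshold logic; here is that logic in Python for illustration (the thresholds and photoperiod are hypothetical):

```python
# Threshold logic of a typical automated-garden controller, in Python
# for illustration; thresholds and schedule are hypothetical.
MOISTURE_THRESHOLD = 400            # raw sensor reading; pump below this
LIGHT_ON_HOUR, LIGHT_OFF_HOUR = 6, 22

def pump_should_run(moisture_reading):
    """Run the nutrient pump when the medium reads too dry."""
    return moisture_reading < MOISTURE_THRESHOLD

def light_should_be_on(hour):
    """Keep grow lights on during the daily photoperiod."""
    return LIGHT_ON_HOUR <= hour < LIGHT_OFF_HOUR
```

On the board itself the same two checks would sit in the Arduino `loop()`, reading an analog pin and toggling relays.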

To see what others have built with Arduino check out this page:

A few tutorial websites:

Intelligence technology

Open Source Intelligence Analysis – We NSA Now

Working Thoughts:

1. Wikileaks can act as a secondary database. What we’ve seen so far makes it clear that most of the classified material is common knowledge, but it could be useful.
2. Robert Steele is right that the humanitarian goodwill approach is superior. We’ve spent a lot of money in Afghanistan, but most of it was spent in unpopulated areas that were safe; the people who needed it didn’t get it. Lots of corruption. A tighter approach could be made.
3. Fiverr and penpal sites can also be useful for general cultural understanding or simple local tasks, e.g. :
4. Nearly all current prediction markets operate as zero-sum or negative-sum markets.

More OSINT Links:

“Dradis is a self-contained web application that provides a centralised repository of information to keep track of what has been done so far, and what is still ahead.”

Links for OSINT (Open Source Intelligence) by Randolph Hock

City Data:

Public Records:

Name/Location Search Engine:

“creepy is an application that allows you to gather geolocation related information about users from social networking platforms and image hosting services. The information is presented in a map inside the application where all the retrieved data is shown accompanied with relevant information (i.e. what was posted from that specific location) to provide context to the presentation.”

Here is a recent example that uses the Palantir platform and OSINT:

Less than four months ago, the Southern portion of Sudan seceded and formed South Sudan, only the 5th country to be created this century. In this session, we will demonstrate how Palantir can draw from a plethora of Open Source Intelligence (OSINT) data sources (including academic research, blogs, news media, NGO reports and United Nations studies) to rapidly construct an understanding of the conflict underlying this somewhat anomalous 21st Century event. Using a suite of Palantir Helpers developed for OSINT analysis, the video performs relational, temporal, statistical, geospatial, and social network analysis of over a dozen open sources of data.

See also:

Detecting Emergent Conflicts through Web Mining and Visualization



Intelligence technology

Open Source Intelligence Analysis – Palantir Does Indeed Kick Ass

Messing around with the Palantir Government suite right now. You can get an account and mess around with it here:

You have the ability to import/export data, filter access, set up collaborative teams, and access the open archives of the US Gov and some non-profits. There are two tiers of users, novice users and power users:

Workspace Operations
Restrictions for Novice Users
Importing data

Novice users can only import data that is correctly mapped to the deployment ontology. Power users are exempt from this restriction.

The maximum number of rows in structured data sources that a Novice user can import at one time is restricted by the NOVICE_IMPORT_STRUCTURED_MAX_ROWS system property. The default value for this property is 1000.

The maximum size of unstructured data sources that can be imported by a Novice user at one time is restricted by the NOVICE_IMPORT_UNSTRUCTURED_MAX_SIZE_IN_MB system property. The default value for this property is 5 megabytes.
Tagging text

The maximum number of tags that a Novice user can create using the Find and Tag helper is restricted by the system property NOVICE_FIND_AND_TAG_MAX_TAGS. The default setting for this property is 50.

Novice users cannot access the Tag All Occurrences in Tab option in the Browser’s Tag As dialog.
SearchAround search templates

Novice users cannot import SearchAround Templates from XML files.

Novice users cannot publish SearchAround templates for use by the entire deployment, and cannot edit published templates.
All other SearchAround features remain available.
Resolving Nexus Peering data conflicts
The Pending Changes application is available only in the Palantir Enterprise Platform, and is only accessible to Workspace users who belong to the Nexus Peering Data Managers user group.
Nexus Peering Data Managers use the Pending Changes application to check for, analyze, and resolve data conflicts that are not automatically resolved when a local nexus is synchronized with a peered nexus.
Deleting objects

Novice users cannot delete published objects.

Novice users cannot delete objects created or changed by other users.
Resolving objects

The maximum number of objects that Novice users can resolve together at one time is restricted by the NOVICE_RESOLVE_MAX_OBJECTS system property. This restriction does not apply to objects resolved by using existing object resolution suites in the Object Resolution Wizard or during data import.

Novice users may use the Object Resolution Wizard only when using existing object resolution suites. Novice users cannot perform Manual Object Resolution, and cannot record new resolution criteria as an Object Resolution Suite.
To learn more, see Resolving and Unresolving Objects in Workspace: Beyond the Basics.
Map application restrictions
All map metadata tools in the Layers helper are restricted.
Novice users cannot access features that allow sorting of layers by metadata, coloring by metadata, or the creation of new metadata. All other Layer helper functions remain available.

In case you didn’t get what I just said: you have access to the same tools the FBI and CIA use, minus some minor limitations and access to classified documents. If you have access to Wolfram Alpha/Mathematica and can Google the history of your topic of interest, then most of the classified files become redundant.

What about data mining on a budget?

Consider relying on one or more GPUs. A CPU is designed to be a multitasker that can quickly switch between actions, whereas a graphics processing unit (GPU) is designed to perform the same calculation over and over, giving large increases in throughput. The stacks in the papers listed below, while already delivering exponentially higher speeds, did not use modern designs or graphics cards, which kept them from running even faster.

The GPU (Graphics Processing Unit) is changing the face of large-scale data mining by significantly speeding up the processing of data mining algorithms. For example, using the K-Means clustering algorithm, the GPU-accelerated version was found to be 200x-400x faster than the popular benchmark program MineBench running on a single-core CPU, and 6x-12x faster than a highly optimised CPU-only version running on an 8-core CPU workstation.

These GPU-accelerated performance results also hold for large data sets. For example, in 2009, on a data set with 1 billion 2-dimensional data points and 1,000 clusters, the GPU-accelerated K-Means algorithm took 26 minutes (using a GTX 280 GPU with 240 cores), whilst the CPU-only version running on a single-core workstation, using MineBench, took close to 6 days (see the research paper “Clustering Billions of Data Points using GPUs” by Ren Wu and Bin Zhang, HP Laboratories). Substantial additional speed-ups would be expected were the tests conducted today on the latest Fermi GPUs with 480 cores and 1 TFLOPS of performance.
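To make concrete what those papers parallelize, here is a minimal CPU-side K-Means sketch in plain NumPy (my own illustration, not code from the papers above); the all-pairs distance step is the part the GPU versions offload to hundreds of cores:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain NumPy K-Means sketch (assumes no cluster goes empty)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Squared distance from every point to every center: the O(n*k)
        # hot loop that the GPU implementations parallelize.
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels
```

On a GPU the distance matrix is computed by thousands of threads at once, which is where the 200x-400x figures come from.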

Over the last two years hundreds of research papers have been published, all confirming the substantial improvements in data mining that the GPU delivers. I will identify a further 7 data mining algorithms where substantial GPU acceleration has been achieved, in the hope that it will stimulate your interest in using GPUs to accelerate your own data mining projects:

Hidden Markov Models (HMMs) have many data mining applications, such as financial economics, computational biology, addressing the challenges of financial time series modelling (non-stationarity and non-linearity), analysing network intrusion logs, etc. Using parallel HMM algorithms designed for the GPU, researchers (see “cuHMM: a CUDA Implementation of Hidden Markov Model Training and Classification” by Chaun Lin, May 2009) were able to achieve a performance speedup of up to 800x on a GPU compared with the time taken on a single-core CPU workstation.
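For orientation, the likelihood computation at the heart of HMM training is the forward algorithm, which is just a chain of matrix-vector products per time step; that per-step linear algebra is what the CUDA implementations spread across states. A toy sketch (my own, not cuHMM code):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward-algorithm likelihood P(obs | model) for a discrete HMM.
    pi: initial state probabilities (S,); A: transition matrix (S, S);
    B: emission matrix (S, num_symbols); obs: sequence of symbol indices.
    Each step is a matrix-vector product, which parallelizes well on a GPU."""
    alpha = pi * B[:, obs[0]]          # probability mass after first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()
```

Real implementations also rescale alpha each step to avoid underflow on long sequences; this sketch omits that.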

Sorting is a very important part of many data mining applications. Last month Duane Merrill and Andrew Grimshaw (from the University of Virginia) reported a very fast implementation of the radix sort method, exceeding a 1G keys/sec average sort rate on the GTX 480 (NVIDIA Fermi GPU). See
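For the curious, here is a toy least-significant-digit radix sort in Python (my own sketch, not Merrill and Grimshaw’s code); the GPU version runs these same counting passes in parallel over thousands of threads:

```python
def radix_sort(keys, bits=32, radix=256):
    """LSD radix sort on non-negative integer keys, one byte per pass.
    Each pass is a stable bucket distribution by the current byte."""
    passes = (bits + 7) // 8
    for p in range(passes):
        shift = p * 8
        buckets = [[] for _ in range(radix)]
        for k in keys:
            buckets[(k >> shift) & (radix - 1)].append(k)
        # Stable concatenation preserves order established by earlier passes.
        keys = [k for b in buckets for k in b]
    return keys
```

The stability of each pass is what makes the byte-by-byte approach correct, and it is also what GPU implementations preserve with parallel prefix sums.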

Density-based Clustering is an important clustering paradigm since it is typically robust to noise and outliers and very good at finding clusters of arbitrary shape in metric and vector spaces. Tests have shown GPU speed-ups ranging from 3.5x for 30k points to almost 15x for 2 million data points. A guaranteed GPU speedup factor of at least 10x was obtained on data sets consisting of more than 250k points. (See “Density-based Clustering using Graphics Processors” by Christian Bohm et al.)

Similarity Join is an important building block for similarity search and data mining algorithms. Researchers used a special GPU algorithm called index-supported similarity join to outperform the CPU by a factor of 15.9x on 180 MB of data. (See “Index-supported Similarity Join on Graphics Processors” by Christian Bohm et al.)

Bayesian Mixture Models have applications in many areas; of particular interest is the Bayesian analysis of structured massive multivariate mixtures with large data sets. Recent research work (see “Understanding GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures” by Marc Suchard et al.) demonstrated that an old-generation GPU (GeForce GTX 285 with 240 cores) was able to achieve a 120x speed-up over a quad-core CPU version.

Support Vector Machines (SVMs) have many diverse data mining uses, including classification and regression analysis. Training SVMs and using them for classification remain computationally intensive. The GPU version of an SVM algorithm was found to be 43x-104x faster than the CPU version for building classification models, and 112x-212x faster for building regression models. See “GPU Accelerated Support Vector Machines for Mining High-Throughput Screening Data” by Quan Liao, Jibo Wang, et al.

Kernel Machines. Algorithms based on kernel methods play a central part in data mining, including modern machine learning and non-parametric statistics. Central to these algorithms are a number of linear operations on matrices of kernel functions which take as arguments the training and testing data. Recent work (see “GPUML: Graphical processors for speeding up kernel machines” by Balaji Srinivasan et al., 2009) involves transforming these kernel machines into parallel kernel algorithms on a GPU, and the following are two examples where considerable speed-ups were achieved: (1) estimating the densities of 10,000 data points on 10,000 samples, where the CPU implementation took 16 seconds whilst the GPU implementation took 13 ms, a speed-up well in excess of 1,230x; (2) Gaussian process regression on 8-dimensional data, where the GPU took 2 seconds to make predictions whilst the CPU version took hours to make the same predictions.
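As a rough illustration of the kernel-matrix operation behind example (1), here is a naive Gaussian kernel density estimate in 1-D NumPy (a sketch of the idea, not GPUML code); the dense all-pairs kernel evaluation is exactly the part that maps well to a GPU:

```python
import numpy as np

def gaussian_kde(samples, queries, bandwidth=0.5):
    """Naive O(n*m) Gaussian kernel density estimate at the query points.
    The kernel matrix k[i, j] = k(queries[i], samples[j]) is the dense
    all-pairs computation that GPU kernel-machine libraries accelerate."""
    diff = queries[:, None] - samples[None, :]
    k = np.exp(-0.5 * (diff / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return k.mean(axis=1)
```

With 10,000 samples and 10,000 query points the kernel matrix has 10^8 entries, which is why the CPU takes seconds and a GPU takes milliseconds.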

If you want to use GPUs but do not want to get your hands “dirty” writing CUDA C/C++ code (or other language bindings such as Python, Java, .NET, Fortran, Perl, or Lua), then consider using the MATLAB Parallel Computing Toolbox. This is a powerful solution for those who know MATLAB. Alternatively, R now has GPU plugins. A subsequent post will cover using MATLAB and R for GPU-accelerated data mining.

These are space whales flying through the sun:

Intelligence technology

Open Source Intelligence Analysis – Software, Methods, Resources

Research firm Applied Research Associates has just launched a website, Global Crowd Intelligence, that invites the public to sign up and try their hand at intelligence forecasting, BBC Future reports.

The website is part of an effort called Aggregative Contingent Estimation, sponsored by the Intelligence Advanced Research Projects Activity (Iarpa), to understand the potential benefits of crowdsourcing for predicting future events by making forecasting more like a game of spy versus spy.

The new website rewards players who successfully forecast future events by giving them privileged access to certain “missions,” and also allowing them to collect reputation points, which can then be used for online bragging rights. When contributors enter the new site, they start off as junior analysts, but eventually progress to higher levels, allowing them to work on privileged missions.

The idea of crowdsourcing geopolitical forecasting is increasing in popularity, and not just for spies.  Wikistrat, a private company touted as “the world’s first massively multiplayer online consultancy,” was founded in 2002, and is using crowdsourcing to generate scenarios about future geopolitical events. It recently released a report based on a crowdsourced simulation looking at China’s future naval powers.

Warnaar says that Wikistrat’s approach appears to rely on developing “what-if scenarios,” rather than attaching a probability to a specific event happening, which is the goal of the Iarpa project.

Paul Fernhout put together a good open letter a while back on the need for this, and it seems IARPA has put some effort forward for this purpose:

Paul Fernhout: Open Letter to the Intelligence Advanced Programs Research Agency (IARPA)

A first step towards that could be for IARPA to support better free software tools for “crowdsourced” public intelligence work involving using a social semantic desktop for sensemaking about open source data and building related open public action plans from that data to make local communities healthier, happier, more intrinsically secure, and also more mutually secure. Secure, healthy, prosperous, and happy local (and virtual) communities then can form together a secure, healthy, prosperous, and happy nation and planet in a non-ironic way. Details on that idea are publicly posted by me here in the form of a Proposal Abstract to the IARPA Incisive Analysis solicitation: “Social Semantic Desktop for Sensemaking on Threats and Opportunities”

So what kind of tools can an amateur use for making sense of data?

Data Mining and ACH

Here is a basic implementation of ACH:

Analysis of Competing Hypotheses (ACH) is a simple model for how to think about a complex problem when the available information is incomplete or ambiguous, as typically happens in intelligence analysis. The software downloadable here takes an analyst through a process for making a well-reasoned, analytical judgment. It is particularly useful for issues that require careful weighing of alternative explanations of what has happened, is happening, or is likely to happen in the future. It helps the analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult. ACH is grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It helps analysts protect themselves from avoidable error, and improves their chances of making a correct judgment.
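The core of ACH fits in a few lines. Here is a toy sketch (the hypotheses and ratings below are hypothetical, and this is not the downloadable tool): Heuer’s rule is to favor the hypothesis with the fewest inconsistencies, not the one with the most confirmations.

```python
def ach_rank(matrix):
    """matrix: {hypothesis: list of 'C'/'I'/'N' marks, one per piece of evidence
    (Consistent, Inconsistent, Neutral)}. Returns hypotheses ordered from
    least inconsistent (favored) to most inconsistent (rejected first)."""
    scores = {h: marks.count('I') for h, marks in matrix.items()}
    return sorted(scores, key=scores.get)

# Hypothetical example matrix, four pieces of evidence:
evidence_matrix = {
    "H1: insider leak":       ['C', 'I', 'C', 'I'],
    "H2: external intrusion": ['C', 'C', 'N', 'C'],
    "H3: accident":           ['I', 'I', 'I', 'C'],
}
```

The real tool also lets you weight evidence by credibility and diagnosticity; counting inconsistencies is the minimal version of the idea.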

RapidMiner – About 6% of data miners use it – Can use R as an extension with a GUI

R – 46% of data miners use this – in some ways better than commercial software – I’m not sure what the limit of this software is, incredibly powerful

Network Mapping

Multiple tools – Finding sets of key players in a network – Cultural domain analysis – Network visualization – Software for analyzing ego-network data – Software package for visualizing social networks

NodeXL is a free, open-source template for Microsoft® Excel® 2007 and 2010 that makes it easy to explore network graphs. With NodeXL, you can enter a network edge list in a worksheet, click a button and see your graph, all in the familiar environment of the Excel window.

Stanford Network Analysis Platform (SNAP) is a general purpose, high performance system for analysis and manipulation of large networks. Graphs consist of nodes and directed/undirected/multiple edges between the graph nodes. Networks are graphs with data on the nodes and/or edges of the network.

*ORA is a dynamic meta-network assessment and analysis tool developed by CASOS at Carnegie Mellon. It contains hundreds of social network and dynamic network metrics, trail metrics, procedures for grouping nodes, identifying local patterns, and comparing and contrasting networks, groups, and individuals from a dynamic meta-network perspective. *ORA has been used to examine how networks change through space and time, contains procedures for moving back and forth between trail data (e.g. who was where when) and network data (who is connected to whom, who is connected to where …), and has a variety of geo-spatial network metrics and change detection techniques. *ORA can handle multi-mode, multi-plex, multi-level networks. It can identify key players, groups and vulnerabilities, model network changes over time, and perform COA analysis. It has been tested with large networks (10^6 nodes per 5 entity classes). Distance-based, algorithmic, and statistical procedures for comparing and contrasting networks are part of this toolkit.

NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
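All of these packages start from the same primitive: an edge list turned into centrality scores for finding key players. A minimal pure-Python sketch of the simplest measure, degree centrality (my own illustration, not any package’s API):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Undirected edge list in, normalized degree centrality out: the most
    basic 'key player' measure the SNA tools above all provide.
    Assumes the graph has at least two nodes."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    # A node touching every other node scores 1.0.
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}
```

NetworkX gives you this as `nx.degree_centrality(G)` along with betweenness, eigenvector, and the other measures used to spot brokers and hubs.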

Social Networks Visualizer (SocNetV) is a flexible and user-friendly tool for the analysis and visualization of Social Networks. It lets you construct networks (mathematical graphs) with a few clicks on a virtual canvas or load networks of various formats (GraphViz, GraphML, Adjacency, Pajek, UCINET, etc) and modify them to suit your needs. SocNetV also offers a built-in web crawler, allowing you to automatically create networks from all links found in a given initial URL.

SUBDUE is a graph-based knowledge discovery system that finds structural, relational patterns in data representing entities and relationships. SUBDUE represents data using a labeled, directed graph in which entities are represented by labeled vertices or subgraphs, and relationships are represented by labeled edges between the entities. SUBDUE uses the minimum description length (MDL) principle to identify patterns that minimize the number of bits needed to describe the input graph after being compressed by the pattern. SUBDUE can perform several learning tasks, including unsupervised learning, supervised learning, clustering and graph grammar learning. SUBDUE has been successfully applied in a number of areas, including bioinformatics, web structure mining, counter-terrorism, social network analysis, aviation and geology.

A range of tools for social network analysis, including node- and graph-level indices, structural distance and covariance methods, structural equivalence detection, p* modeling, random graph generation, and 2D/3D network visualization. (R-based) … index.html

statnet is a suite of software packages for network analysis that implement recent advances in the statistical modeling of networks. The analytic framework is based on Exponential family Random Graph Models (ergm). statnet provides a comprehensive framework for ergm-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. (Requires R)

Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.

GraphChi is a spin-off of the GraphLab project from Carnegie Mellon University. It is based on research by Aapo Kyrola and his advisors.

GraphChi can run very large graph computations on just a single machine, by using a novel algorithm for processing the graph from disk (SSD or hard drive). Programs for GraphChi are written in the vertex-centric model proposed by GraphLab and Google’s Pregel. GraphChi runs vertex-centric programs asynchronously (i.e., changes written to edges are immediately visible to subsequent computation) and in parallel. GraphChi also supports streaming graph updates and removal of edges from the graph. The ‘Performance’ section contains some examples of applications implemented for GraphChi and their running times.

The promise of GraphChi is to make web-scale graph computation, such as analysis of social networks, available to anyone with a modern laptop. It saves you the hassle and cost of working with a distributed cluster or cloud services. We find it much easier to debug applications on a single computer than to try to understand how a distributed algorithm is executed.

In some cases GraphChi can solve bigger problems in reasonable time than many other available distributed frameworks. GraphChi also runs efficiently on servers with plenty of memory, and can use multiple disks in parallel by striping the data.
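To give a feel for the vertex-centric model that GraphChi and Pregel programs use, here is a toy synchronous PageRank in Python (a sketch of the programming style only; GraphChi itself runs asynchronously and streams the graph from disk):

```python
def pagerank_vertex_centric(out_edges, iters=20, d=0.85):
    """Toy synchronous vertex-centric PageRank in the Pregel style:
    each vertex reads messages arriving on its in-edges, updates its value,
    and sends its value out along its out-edges.
    out_edges: {vertex: [neighbor, ...]}. Assumes no dangling vertices."""
    nodes = set(out_edges) | {v for vs in out_edges.values() for v in vs}
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        incoming = {v: 0.0 for v in nodes}
        for u, vs in out_edges.items():
            for v in vs:
                incoming[v] += rank[u] / len(vs)   # message along each out-edge
        rank = {v: (1 - d) / len(nodes) + d * incoming[v] for v in nodes}
    return rank
```

The same update rule, expressed per-vertex, is what GraphChi executes out-of-core over graphs with billions of edges.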

Web Based Stuff:

Play amateur Gestapo from the comfort of your living room:

Search Professionals by Name, Company or Title, painfully verbose compared to the above 2 tools

Broad list of search engines


A tool that uses Palantir Government:

connected with the following datasets:
and some misc. others

Database Listings

Analytic Methods:


Morphological Analysis – A general method for non-quantified modeling

Modeling Complex Socio-Technical Systems using Morphological Analysis

CIA Tradecraft Manual

Top 5 Intelligence Analysis Methods: Analysis Of Competing Hypotheses
(the author scores a 4.4 of 5 on , 2.4 on the easiness scale)
Many new analysts find that getting started is the hardest part of their job. Stating the objective, from the consumer’s standpoint, is an excellent starting point. If the analyst cannot define the consumer and his needs, how is it possible to provide analysis that complements what the consumer already knows?

“Ambassador Robert D. Blackwill … seized the attention of the class of some 30 [intelligence community managers] by asserting that as a policy official he never read … analytic papers. Why? “Because they were nonadhesive.” As Blackwill explained, they were written by people who did not know what he was trying to do and, so, could not help him get it done:
“When I was working at State on European affairs, for example, on certain issues I was the Secretary of State. DI analysts did not know that–that I was one of a handful of key decision makers on some very important matters….”

More charitably, he now characterizes his early periods of service at the NSC Staff and in State Department bureaus as ones of “mutual ignorance”:

“DI analysts did not have the foggiest notion of what I did; and I did not have a clue as to what they could or should do.”[6]
Blackwill explained how he used his time efficiently, which rarely involved reading general CIA reports. “I read a lot. Much of it was press. You have to know how issues are coming across politically to get your job done. Also, cables from overseas for preparing agendas for meetings and sending and receiving messages from my counterparts in foreign governments. Countless versions of policy drafts from those competing for the President’s blessing. And dozens of phone calls. Many are a waste of time but have to be answered, again, for policy and political reasons.

“One more minute, please, on what I did not find useful. This is important. My job description called for me to help prepare the President for making policy decisions, including at meetings with foreign counterparts and other officials…. Do you think that after I have spent long weeks shaping the agenda, I have to be told a day or two before the German foreign minister visits Washington why he is coming?”


Replying To John Robb on Drones, Self-Driving Cars

John’s post:

I spent some time on the phone with a reporter from Inc. Magazine last week. We were discussing the future of entrepreneurship and where new opportunity could be found.

He asked me about drones and if there were opportunities for entrepreneurs there.

I told him that there were only two places where drones were going to gain traction:

  • Security.   From military operations to intelligence gathering to police surveillance.
  • DiY.   People building their own drones and finding interesting ways to use them.

That’s it.


All of the other uses of drones are closed off due to legal restrictions:

  • Drones for passenger transport.  It’s pretty clear that drones could be used to transport passengers safely and at much less cost than a manned aircraft.  It won’t happen.  Too many legal implications and push back from unions.
  • Drones for private info gathering.  Currently prevented.  There’s going to be legal wrangling over this for decades, which will prevent an industry from forming (other than “security” related).
  • Drones for short haul delivery/transport.  Too difficult to overcome the legal ramifications of operating drones on a mass scale near to homes/buildings.  It will definitely be used in the military.

Much of the same logic is going to be applied to other forms of autonomous robotics.  For example: robots can drive a car better than a human being.  Google proved that already with their mapping car.  Will it be common to see “automated” cars in the next decade?  Probably not.  The first person killed by one will kill the industry through lawfare. Link

My response:

I doubt it, John. I think you have over-weighted the power of lawsuits versus innovation.

Usually when there are legal blockages to a technology, they hold it back only for a short time until it finds a way around them. See the transition from embryonic stem cells to skin stem cells.

For cars in particular, people want safety much more than energy efficiency. That’s part of the reason SUVs have outsold economy car designs: most of the models are much safer, barring the extremely large ones that are vulnerable to tipping over.

Recently smaller SUVs that combine the two attributes have become the most popular design. The demand for safety in automobiles has always been extreme, and in the case of the Google car Sebastian Thrun has been extremely careful in making sure the cars don’t have any accidents even in testing.

People put more trust in tech companies like Google than they do in the legal system or Congress, by a wide margin:

Nevada has already legalized self-driving cars, and California is following.

So you assume that:
1. People’s desire for safety in consumer choices will be outweighed by their desire for control
2. There is no judo move to counter regulations, as there has been for many past technologies
3. That current friendly regulations are a smoke-screen for a coming crackdown
4. That people won’t trust the Google brand in particular in the case of the car
5. That a small number of lawsuits can destabilize a potentially multi-billion dollar industry, in spite of there being a perfect safety record thus far


Adjustable Taser/Pepper Spray Riot Shield By Bernardo Bajana

This shield chunks several different concepts into one. It’s close to a poor man’s exoskeleton. Most officers have a significant strength advantage over the people doing the rioting, but an additional exoskeleton could be attached to provide short bursts of anaerobic power for repelling crowds. The filament used on this shield can turn nearly anything into a touch taser.


economics marketing

What Marketing Simulations Haven’t Shown Me

Complex systems are built up by connecting diverse agents with interdependent relationships that change over time.

What all of the surveys, opinion polls, and other marketing data don’t tell us is the complex web of interactions that leads to a sale, or to an individual forming an opinion. That’s the mistake: data from opinion polls measures current opinion when you need tomorrow’s. When you try to cross-correlate it, you are only looking at the outputs of multiple systems, mashing them together and expecting something meaningful. The key is to go back to the inputs, then move forward tracking the finite states of each agent inside the system.

To put it another way: is a soccer mom going to buy an energy-efficient car to save the environment, or a larger car that is obviously safer? Does anyone think that the majority of people, particularly women, as they make most purchase decisions now, would choose a car that obviously isn’t as safe for themselves and their family, including the children who ride along, just for the sake of an abstract concept like helping the environment? This is basic stuff from Drew Whitman’s Life-Force 8. If you wish to map the future you have to move beyond the numbers into higher levels of abstraction while keeping in mind the nature of the agents and the connections between them. Add the finite effects of tools and resources on an agent’s environment and you have an idea of what could happen. And you don’t even need a computer to crunch the numbers.
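To show what “tracking the finite states of each agent” means in practice, here is a toy word-of-mouth adoption model in Python (all parameters hypothetical): instead of cross-correlating poll outputs, you set the inputs and run the system forward.

```python
import random

def simulate_adoption(n_agents=100, neighbors=4, threshold=2,
                      seeds=5, steps=10, rng_seed=0):
    """Toy threshold model on a ring of agents: an agent 'buys' once enough
    of its neighbors have. Returns the number of adopters after `steps`."""
    rng = random.Random(rng_seed)
    adopted = [False] * n_agents
    for i in rng.sample(range(n_agents), seeds):   # initial buyers (the input)
        adopted[i] = True
    for _ in range(steps):
        nxt = adopted[:]
        for i in range(n_agents):
            if adopted[i]:
                continue
            # Each agent sees `neighbors` nearby agents on the ring.
            nbrs = [(i + d) % n_agents
                    for d in range(-(neighbors // 2), neighbors // 2 + 1) if d != 0]
            if sum(adopted[j] for j in nbrs) >= threshold:
                nxt[i] = True
        adopted = nxt
    return sum(adopted)
```

Change the inputs (seed buyers, network shape, threshold) and rerun: the interaction structure, not any single poll number, determines the outcome.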

The real fun begins when you can alter the simplest inputs of the agents.

SUV sales have actually been growing in recent months, according to CNNMoney, from about one in five vehicles sold back in the 1990s and early 2000s, to almost a third of all vehicles sold today.

The vast majority of SUVs sold today are actually smaller, more diminutive versions of their ancestors, and have fuel economy that’s as good or better than many passenger cars on the road. For instance, the Chevrolet Equinox gets better combined city and highway mileage than some models of the Honda Accord.

Their economic power is truly revolutionary, representing the largest market opportunity in the world. Just look at the numbers: Women control 65 percent of global spending and more than 80 percent of U.S. spending. By 2014, the World Bank predicts that the global income of women will grow by more than $5 trillion. In both emerging markets and developed nations, women’s power of influence extends well beyond the traditional roles of family and education to government, business, and the environment.


Replying To Ion-Tom On Ideas for the Ultimate Strategy Game

He wrote a long post, but here is the beginning:

I want to see procedural content not just create the raw terrain maps, but thousands of tribes, nations and empires. I want it to generate language, culture, aesthetics, architecture, religion, scientific progress, humanity. I believe that many different game engines should connect to a single cloud data application in order to create persistent worlds. Strategy gaming should prepare for next generation graphics technology, neural network AI, and implement many different portals to access game information. Taking exponential trends into consideration, I want to see what a strategy game looks like when millions of semi-intelligent “agents” compete or collaborate for resources.

I’m talking about Guns, Germs and Steel in gaming form. It could answer questions about human settlement patterns. With different continental configurations, do certain types of regions always become colonial powers or is having many states in feudal competition becoming market powers all that is needed? Does this usually follow the parallel latitude crop theory? The game could have an arcade mode like the Civilization and an observer mode: set the stage and watch. Go back in time, change a few variables and watch the difference. Maybe I’m alone, but that type of concept excites me!

My response:

So one of the first things that came to mind was Palantir’s software:

And taking their idea of reducing friction between human and computer to enhance human capabilities:

Also, here’s an overview of some of the older high-dollar ABM projects that were implemented in the past, mostly using JAVA:

Combat will change over time like you said. One of the models being used in real life is generational, 1st to 4th generation warfare (see: John Robb, Global Guerrillas), with theoretical 5th generation warfare. So you would have to choose how realistic or metaphorical you would render that action. You know, if you had a civilization event where spearmen destroy a tank, or if you could zoom straight down and play 1st person as the spearman trying to exploit the terrain, things like that. Then you move into modern things like winning hearts and minds, then into the vague world of secrecy and influencing complex systems in 5GW.

Economics will change over time.

If you’re creating language, then you’re also creating, to some degree, the box within which people think, and that creates a feedback loop with culture. So you have to take a System Dynamics approach, with stocks, flows and feedback loops, and possibly go much further, because you have to extend the model so far out.
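A minimal sketch of that kind of stock-and-flow feedback loop (the two stocks, the coupling, and the constants are all purely illustrative):

```python
def simulate_language_culture(steps=50, dt=0.1):
    """Two coupled stocks in a reinforcing feedback loop: each grows in
    proportion to the other, damped near a carrying capacity so neither
    runs off to infinity. Returns the trajectory of (language, culture)."""
    language, culture = 1.0, 1.0   # stocks
    cap = 100.0                    # carrying capacity (illustrative)
    history = []
    for _ in range(steps):
        # Flows: reinforcing coupling with logistic damping.
        d_lang = 0.5 * culture * (1 - language / cap)
        d_cult = 0.3 * language * (1 - culture / cap)
        language += d_lang * dt
        culture += d_cult * dt
        history.append((language, culture))
    return history
```

Real System Dynamics tools (Vensim, Stella) are just this pattern with many more stocks, delays, and nonlinear table functions.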

You will have to measure emotional reactions as well, but the question is how. Are you going to use the traditional valence and arousal model to represent how a given population reacts to stimulus? I’m thinking back to OpenCog’s agent and its ability to become “scared” or inquisitive because it uses an economic attention-allocation model.

I don’t think Jared Diamond’s theory is going to cut it, though, for explaining everything to the degree needed to simulate this world from a historical perspective. Diverse agents connected by interdependent relationships that adapt over time.

Bleh, that’s all I got right now.

Also, if you could do that, it might be realistic and immersive enough that people would pay you to test and develop it, like a subscription based MMORPG-ish thing.

See also: Game Mechanics: Advanced Game Design by Ernest Adams and Joris Dormans

It’s on amazon and “other” sites.

If you wanted to introduce more variety into the game, instead of following a fixed technology tree, it could implement different concepts of the singularity towards the end game. You could shift currency over time: as processes become automated you would move from a coin-based economy to a paper one, then from paper to digital currency, then possibly to currency based on:

  1. Energy – This may or may not be relevant. On the one hand, advanced solar panels and cells, combined with nuclear power, make energy abundant. It remains to be seen how much power future computing will eat up. Current power drain on large systems comes not only from the computers themselves but from the cost of cooling. Intel has been working on low-power mobile processors:
  2. Antimatter: Scientists claim that antimatter is the costliest material to make.[37] In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons[38] (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen.[37] This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators), and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss Francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions).[39] Several NASA Institute for Advanced Concepts-funded studies are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.[40]
  3. If computers take over most of the operations in a society, costs can be based off of CPU cycles

This raises the question of whether space exploration would be useful in this context. Bill Stone has an old but good TED talk on space exploration, his ongoing work to journey to the moon, and his plan to mine the fuel for the return trip from the moon itself:

In the computers rule everything dynamic, Hugo de Garis has some interesting ideas on “Artilects”:

One field no one mentions: hardware security. Trying to beat computers through software is nice and all, but there are many hardware bugs, BIOS rootkits (technically software), FireWire attacks that pass through security, and USB sticks with malware and keyloggers (which can be built into a keyboard). Unless an AI/AGI has built-in defenses to make its hardware difficult to get at, to self-destruct, or to reprise against attackers, it is a vulnerable target.

Eclipse Phase also has interesting implementations.

His reply:

I like everything about what you just said. I’m familiar with the Van Allen Belt antimatter and Hugo de Garis but I wasn’t familiar with Bill Stone, that’s pretty awesome! And the Adams-Dormans book looks awesome! Currently I’m reading this book my friend lent me. (Glad I didn’t have to pay for it!)

I am a big proponent of a genetic algorithm based tech tree that builds momentum towards a singularity end game. Not sure how it could be implemented at first. In an ideal world the engine is sufficiently advanced to model physics and the agents experiment with the physics engine. For the short term, I think breaking all technologies into their basic components based on physics would give a good “lego set” for building technologies. Each component could have a weight. Every time it gets used in a technology it gets a stronger weight. Etc. The momentum builds.
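A minimal sketch of that weighted “lego set” idea, with made-up component names and an assumed reinforcement rule (every use adds a fixed weight), just to show the momentum mechanic:

```python
import random

# Illustrative sketch of a weighted component-based tech tree.
# Component names and the +0.5 reinforcement rule are assumptions.

random.seed(42)

components = {"lens": 1.0, "circuit": 1.0, "motor": 1.0, "battery": 1.0}

def invent(n_parts: int = 2) -> list:
    """Pick components for a new technology, biased toward heavier weights."""
    names = list(components)
    weights = [components[c] for c in names]
    return random.choices(names, weights=weights, k=n_parts)

def build_tree(generations: int = 50) -> list:
    """Each invention reinforces its components, so momentum builds."""
    tree = []
    for _ in range(generations):
        tech = invent()
        for part in tech:
            components[part] += 0.5  # reinforce used components
        tree.append(tech)
    return tree

tree = build_tree()
# Frequently-used components end up with the highest weights,
# so later technologies cluster around them.
print(sorted(components.items(), key=lambda kv: -kv[1])[:2])
```

The positive feedback here is the point: once a component pulls ahead, it keeps getting picked, which mirrors how real tech trees converge on a few workhorse technologies.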

So you think hardware security makes AGI vulnerable? I suppose it’s an engineering question. Right now it’s vulnerable, but so were the world’s first single-cell organisms. I’ll bet security increases over time as neuromorphic chips become more complex; maybe not though.


Replying To Nicholas Eftimiades On Intelligence at the Speed of Thought

The link to his original post is here:

For future national security needs, the most stressing intelligence requirements will be for remote-sensing systems to detect, track, cross-cue, and characterize fleeting targets in real time. This ability will require a global network of sensors to detect and track individuals, vehicles, chemicals, materials, and emanations and a space network backbone to move data. Pervasive CCTV systems now present worldwide in airports, border crossings, railroads, buses, and on the streets of many cities will be integrated and supported by powerful computers, smart software agents, vast facial pattern and retina recognition databases, and communications infrastructure. These systems will be integrated with sensors and databases detecting, identifying, and characterizing spectral signatures, chemical compositions, DNA, effluents, sounds, and much more.

My Response:

There’s an interesting piece of open source software called Eureqa that can search for hidden mathematical equations in data. It’s amazing how quickly supervised and unsupervised learning algorithms along with gesture recognition are developing.
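A toy version of that equation-search idea: real tools like Eureqa evolve expression trees via genetic programming, but even brute-forcing a fixed candidate list (the forms below are invented for illustration) shows how a hidden law can be recovered from data:

```python
# Toy equation discovery: search candidate formulas and keep the one
# that best explains the data. Real symbolic regression evolves
# expression trees; this brute-force form is just illustrative.

data = [(x, 3 * x * x + 2) for x in range(-5, 6)]  # hidden law: y = 3x^2 + 2

candidates = {
    "y = x + 2":      lambda x: x + 2,
    "y = 2*x**2":     lambda x: 2 * x * x,
    "y = 3*x**2 + 2": lambda x: 3 * x * x + 2,
}

def error(f):
    """Sum of squared residuals against the observed data."""
    return sum((f(x) - y) ** 2 for x, y in data)

best = min(candidates, key=lambda name: error(candidates[name]))
print(best)  # "y = 3*x**2 + 2" fits exactly, with zero squared error
```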

Over time we’ll develop better long-range sensors to detect emotional valence and arousal, so we can judge the details of a person’s emotional state and correlate it with the rest of the data. Thermal and hyperspectral imaging can be used to judge the blood flow to an area like the face, indicating stress. We have simple things like EPS and heartbeat sensors and eye-tracking software, and they are developing over time. Current Microsoft Kinect sensors just build a simple stick-figure skeleton, but newer sensors are being developed with more potential. This input will likely improve agent-based modeling software, as we will be able to have actual emotions as inputs.

Satellite launch costs are going down and NASA is turning LEO over to the private sector, so we can expect an increase in space-based sensors and services. That might lead to better climate detection models and therefore better advanced hurricane/tornado warning times.

It also applies to AGI research; much of the data we learn from comes in through vision, among other senses. So from the perspective of building an AGI, adding more sensors means it can get smarter much faster and in entirely new ways. When you talk about integrating that level of sensory information and processing it, you end up with intelligence that makes the difference between an Einstein and a village idiot seem as tiny as a grain of sand.

We can already load sounds into programs like Wolfram Mathematica and analyze them, extract data, and then plot, graph or connect the data in hundreds of other ways. I’m not as familiar with MATLAB, but I know it has a wide range of functions too.
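As a rough analogue of that workflow in plain Python: synthesize a 440 Hz tone, then recover its dominant frequency with a direct DFT (Mathematica’s Fourier or MATLAB’s fft would do this in one call):

```python
import math

# Minimal sound analysis: synthesize a 440 Hz tone, then find its
# dominant frequency by scanning DFT bins up to the Nyquist limit.

rate = 8000          # samples per second
n = 1000             # number of samples (440 Hz fits exactly: 55 cycles)
signal = [math.sin(2 * math.pi * 440 * t / rate) for t in range(n)]

def dft_magnitude(signal, k):
    """Magnitude of the k-th DFT bin, computed directly."""
    re = sum(s * math.cos(2 * math.pi * k * t / len(signal))
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * k * t / len(signal))
             for t, s in enumerate(signal))
    return math.hypot(re, im)

peak_bin = max(range(n // 2), key=lambda k: dft_magnitude(signal, k))
print(peak_bin * rate / n)  # 440.0 Hz
```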

Right now the main concern is reducing interface friction so humans and machines can work together properly, but every year more functions are being added to the software and more data is being captured. Eventually we’re going to need a significant step up in intelligence to be able to work with it.

Thinking more on it, the alternative may be that things will get easier to use.

Programming languages have become somewhat simpler over time. As compilers become able to handle memory as well as or better than humans, going down to the C/C++ level won’t be required, as long as there aren’t incompatibility issues. GUIs have gotten better over the years as well.

I wonder how humans will choose to control access and connections between their AI/AGI programs as time goes on. The newer generation isn’t as concerned about privacy and is willing to give out tons of data on Twitter and Facebook.

Another area that’s still very empty: implant security. Most of these devices can pick up wireless signals now, and hackers have already figured out ways to mess with pacemakers and the like. Ditto for self-driving cars; we’ve already had people spoofing GPS units to make them give false data. We’re going to have attack-versus-defense issues, security versus accessibility, the works.

(Technical note: Kinect skeleton drawing is done in software, but improvements to hardware will affect its capabilities.)

His response:

I agree with most of what you wrote, but I don’t think lowering launch costs is going to lead to better climate detection models. Those space-based sensors are excellent now; that is more a function of computing power and airborne/ground-based sensors. And an increase in space-based sensors and services is going to be more a function of electronics miniaturization allowing more capability in orbit for the same launch price. But I agree launch costs will come down as well.

Also, thermal and hyperspectral imaging can be used to judge the blood flow to an area like the face, but it is only useful if you have the spectral signature of that specific face at rest and under stress. Either that, or you have continuous monitoring and can watch the blood flow go up and down.

Implant security is an area of concern. A UK college professor recently demonstrated infecting numerous devices with an embedded biochip.

Cool discussion.


Quantum Computing Power Rising Exponentially

Quantum computing threatens to make much of our current public-key encryption useless. Researchers have long debated whether Artificial General Intelligence will require quantum computers, but by the time AGI technology filters out to the general public it will likely run on a quantum machine by default. The biggest gains will come from integrating this with sensor nets, which will act as the eyes and ears of our computer systems in the physical world, coupled with supervised and unsupervised learning algorithms rather than a reliance on advancing software by programming alone. Machines are already able to infer mathematical patterns from data and design their own experiments.

Right now quantum computing is being used for large scale projects, by companies like Lockheed:
Lockheed Martin Corporation has agreed to purchase the first D-Wave One quantum computing system from D-Wave Systems Inc., according to D-Wave spokesperson Ann Gibbon.

Lockheed Martin plans to use this “quantum annealing processor” for some of Lockheed Martin’s “most challenging computation problems,” according to a D-Wave statement.

D-Wave computing systems address combinatorial optimization problems that are “hard for traditional methods to solve in a cost-effective amount of time.”

These include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing, and bioinformatics.

Or to work on infamously difficult problems in academia, like protein folding:

A team of Harvard University researchers, led by Professor Alan Aspuru-Guzik, have used Dwave’s adiabatic quantum computer to solve a protein folding problem. The researchers ran instances of a lattice protein folding model, known as the Miyazawa-Jernigan model, on a D-Wave One quantum computer.

The research used 81 qubits and got the correct answer 13 times out of 10,000. However, these kinds of problems usually allow simple verification of answer quality, so it cut the search space down from a huge number to 10,000. D-Wave has been working on a 512-qubit chip for the last 10 months. The adiabatic chip does not have predetermined speed-up amounts based on more qubits; it depends on what is being solved, but in general more qubits will translate into better speed and larger problems that can be solved. I interviewed the CTO of D-Wave Systems (Geordie Rose) back in December 2011. Currently the system is not yet faster than regular supercomputers (and often not faster than a desktop computer) for the 128-qubit chip, but it could be for some problems with the 512-qubit chip and should definitely be faster for many problems with an anticipated 2048-qubit chip. However, the D-Wave system can run other kinds of algorithms and solutions which can do things that regular computers cannot. The system was used by Google to train image recognition systems to remove outliers in an automated way.
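The combinatorial optimization problems described above all share the QUBO shape (minimize a quadratic objective over binary variables) that D-Wave’s annealer targets. A classical simulated-annealing sketch over an invented 3-variable problem shows the flavor; this is not D-Wave’s hardware process, just the same problem form:

```python
import math
import random

# Classical simulated annealing on a toy QUBO: minimize E(x) = x^T Q x
# over binary x. The 3-variable Q below is illustrative, not a real workload.

random.seed(0)

Q = [[-2,  1,  0],
     [ 0, -2,  1],
     [ 0,  0, -2]]   # upper-triangular QUBO matrix

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(3) for j in range(3))

def anneal(steps=2000, temp=2.0, cooling=0.995):
    x = [random.randint(0, 1) for _ in range(3)]
    for _ in range(steps):
        i = random.randrange(3)
        y = x[:]
        y[i] ^= 1                                  # flip one bit
        dE = energy(y) - energy(x)
        if dE < 0 or random.random() < math.exp(-dE / temp):
            x = y                                  # accept downhill, sometimes uphill
        temp *= cooling                            # cool the system gradually
    return x

best = anneal()
print(best, energy(best))  # this toy problem's ground-state energy is -4
```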

However it’s likely that in 5-10 years, as a conservative estimate, it will move into the consumer marketplace.

LONDON – Quantum computing has been brought a step closer to mass production by a research team led by scientists from the University of Bristol that has made a transition from using glass to silicon.

The Bristol team has been demonstrating quantum photonic effects in glass waveguides for a number of years but the use of a silicon chip to demonstrate photonic quantum mechanical effects such as superposition and entanglement, has the advantage of being a match to contemporary high volume manufacturing methods, the team claimed.

This could allow the creation of hybrid circuits that mix conventional electronic and photonic circuitry with a quantum circuit for applications such as secure communications.

edit: Here is a comment from a PhD candidate in Physics:

D-Wave’s quantum computer is an adiabatic quantum computer designed to solve optimization problems, not perform universal computations. Its architecture is not compatible with running algorithms based on the circuit model, which include all the fabled cryptography-beating algorithms based on fast factoring (Shor’s algorithm).

In any case, as Michael points out, 128 qubits is certainly not enough to decrypt traditional cryptosystems, and there is some dispute about exactly how “quantum” their computer really is, although their Nature paper has alleviated some of these concerns. At this point, D-Wave’s computer is more relevant as a proof of principle than as an actual computational device. Lockheed Martin probably bought theirs to ensure they will be on the ground floor if this thing takes off.
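For context on those crypto-breaking algorithms: Shor’s algorithm only speeds up the period-finding step of factoring. A classical sketch of the reduction, using the textbook N = 15 example, where the brute-force period search is exactly the part a quantum computer would replace:

```python
from math import gcd

# Factoring reduces to finding the period r of f(x) = a^x mod N.
# Here the period is found by brute force (exponential classically);
# Shor's algorithm does only this step fast on a quantum computer.

def find_period(a, N):
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N, a):
    r = find_period(a, N)      # the quantum speedup lives here
    assert r % 2 == 0          # need an even period for this choice of a
    y = pow(a, r // 2, N)
    return gcd(y - 1, N), gcd(y + 1, N)

print(factor(15, 7))  # (3, 5) -- the classic textbook example
```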


Replying to Iskandaar on Iranian Wargames

First the original post:

A war game organized by Kenneth Pollack of the Brookings Institution’s Saban Center for Middle East Policy was conducted to examine the reactions of both the United States and Iran during escalating events concerning Iran’s nuclear program and the United States’ reactions to attacks by Iran, as reported by David Ignatius of the Washington Post.

Lessons from an Iranian war game

Of interest is what Mr Ignatius pointed out:

“The game showed how easy it was for each side to misread the other’s signals. And these players were separated by a mere corridor in a Washington think tank, rather than half a world away.”

This highlights one of the greatest problems we currently face in dealing not only with Iran but with other countries in the Middle East and Asia: only a vague grasp of how our perceptions and personal biases can distort our reading of the intentions and actions of state actors. While we usually embrace the idea of cultural awareness on a superficial level, this game highlights (in somewhat exaggerated terms) the inability of hypothetical leaders to correctly interpret the actions of our opponents.

These “small miscalculations” ended the scenario in a likely war, an outcome that could have been avoided had more diplomatic interaction occurred. It raises the question of how much political face leaders in the US would attempt to preserve in a real-world situation like this, publicly retaliating against Iran versus attempting to identify the problem and tackle it through diplomatic channels.

There need to be more scenarios like this, more dry runs, more rehearsals, not only with actual government participants but with other countries as well. This scenario provided an in-depth lesson, one that could be used to prevent us from making rash and damaging decisions in a real-world situation.

Later he replied to one of my comments and posed this question:
What we determined in hindsight about the 9/11 attacks, as far as the intel community goes (NPR did a great piece on this), was the inability of our analysts at various levels to effectively use critical thinking for problem solving. For example, analysts were good at receiving raw data and turning out products, but looking at the long-range and higher-order effects the data had on the bigger picture was completely lost on a majority of them. I believe that scenarios and training like this stimulate them, help figure out where the fault lies, and will allow us to approach problem sets with a larger mindset. Thoughts?
This is my reply:

Team size and total time spent working directly affect the ability of people at the bottom and mid-levels to do good analysis and synthesis. Big-picture thinking is difficult when operating under long hours, or in large groups where information becomes simplified and groupthink takes hold to keep everyone on the same page.

(PDF: Rules of Productivity)

I definitely agree that these scenarios are what’s needed, good exercises shouldn’t have all of the data explicitly mentioned, or sometimes even implicitly included in their outline.

Instead of solving for X, they are forced to try to make sense of relationships between things that change over time and require exploration, not just analysis.

Intelligence isn’t just about prediction, but also minimizing surprise. It requires exploration, and exploration usually leads you into a lot of dead-ends.

The CIA put out a good paper on Intelligence Analysis Tradecraft a while back:

(PDF: Tradecraft Primer, April 2009)

One of the big things that sticks out is that intelligence failures often come from bad assumptions that go unchallenged, either because they are implicit and never analyzed or because the individual cannot address alternative perspectives. Shifting perspectives like this requires adaptability in thinking.

Why not have analysts write out all of the implicit and explicit assumptions that go into their analysis, and then try to invert them to see if they still make sense? What kind of divergent-thinking software have they designed for analysts? Stop and think about the ungodly amount of computing power the NSA and CIA have; now think about how to use it to better effect.

The other thing that sticks out is that the CIA Tradecraft paper recommends 10-12 people, while I’ve seen other papers that put the limit where group-think takes hold at 8-10.

I wonder if they tested cutting the groups down into smaller brainstorming sessions, and then doing a scrum of scrums, taking the leaders of the smaller groups into another small brainstorming session after the first run.

Finally, the running strategy for the last 60 years or so for nearly every opfor has been to bleed US forces of cash and morale until they get tired of fighting.

All of the options mentioned by the blue team involve doing things that play into this. The blue team seemed to be ultimately reactive in this strategic context, mostly dependent on what the opfor decides to do.

All of their solutions involve doing things that will agitate the Iranian leadership. This can work if you can successfully go one step higher in violence than your opponent is ready to escalate at that point in time, but they haven’t defined any threshold for what the Iranians are ready to go to. And their willingness to move up to a higher threshold will change over time. Instead they just create a positive feedback loop.


Cyberwar : East Asians Versus Eastern Europeans

Estonia has started teaching kids to code from age six and up:

Just look at Estonia, the tiny Eastern European nation (population 1.3 million), where a new project is being put in place with the ambition of getting every six-year-old to learn coding at school.

The “ProgeTiiger” scheme, according to reports, will begin pilots this year with the ambition of getting school kids of all ages to start coding. There’s no suggestion yet that the classes will be mandatory, but the organization behind the move, the Tiger Leap Foundation, says it wants to produce more creative computer users.

Some Eastern European countries definitely have a strong computer culture.

China does as well, but that is usually wrapped around the purpose of gaining trade secrets. The industries and team sizes are quite different.

In a report entitled ‘Peter the Great vs. Sun Tzu’, Tom Kellermann, vice president of cyber security at Trend Micro, compared hackers from the two regions according to their focus, organization and the sophistication of their malware and infrastructure. His conclusion – the Eastern Europeans are far more insidious and strategic.


On the other hand, fewer anti-debugging techniques are used by East Asian hackers, who are more interested in speed and productivity. Backdoors also tend to be simpler, he noted, stating that “East Asian malware is thrown together quickly using already-existing components.”

“East Asian hackers on the other hand tend to use cheap, hosted infrastructure usually from mass ISPs that are easy to set up and manage,” he said in the report. “They are not necessarily concerned with being identified as the attacker as they do not go to great lengths to hide their tracks like the East European hackers do. This was shown in the recent LuckyCat incident which was traced back to Sichuan University which is a known training school for East Asian military.”

While East Asian groups tend to work for other organizations interested in their skills, hackers from Eastern Europe generally operate in small, independent units, and are focused on profit, he wrote. Their infrastructure tends to be developed by them specifically for their own use in attacks.

“They [Eastern European groups] tend to want to be in control of their entire infrastructure and will routinely set up their own servers for use in attacks, develop their own DNS servers to route traffic and create sophisticated traffic directional systems used in their attacks,” according to the report. “If they do go outside, they will carefully select bulletproof hosters to support their infrastructure. It is their hallmark to maintain control of the whole stack similar to the business models pioneered by Apple.”

“In general, the East Asian hackers are not at the same skill level of maturity as their East European counterparts,” Kellermann concluded. “The East Europeans are master craftsmen who have developed a robust economy of scale which serves as an arms bazaar for a myriad of cyber munitions and bulletproof hosting infrastructures,” he said. Comparing the two to real-world military tactics, Kellermann added that East European hackers act like snipers when they launch campaigns, whereas the East Asian hackers tend to colonize entire ecosystems via the “thousand grains of sand approach”. Link

The OP is here

Here is a study of the LuckyCat incident


Memory Implants, Mind Reading & Facial Recognition Technology

I doubt Edward Bernays could have imagined a world where you can whisper into a customer’s ear, or mine and process data to find incredibly obscure patterns and correlate them with buying behaviors. My greatest fear is not that these technologies will be used against us; we have already started the arms race against the unprotected mind. It’s difficult to tell whether the defenses, when developed, will take the form of hardening targets or of simply creating a pact of mutually assured destruction.

Previous posts have shown how easy it is, using open source data, to predict the outcomes of politically unstable events or to determine the likelihood of riots. My fear is that my failures of imagination will keep me from understanding the full and varied potential of its use.

Tests in 2010 showed that the best algorithms can pick someone out in a pool of 1.6 million mugshots 92 per cent of the time. It’s possible to match a mugshot to a photo of a person who isn’t looking at the camera too. Algorithms such as one developed by Marios Savvides’s lab at Carnegie Mellon can analyse features of a front and side view set of mugshots, create a 3D model of the face, rotate it as much as 70 degrees to match the angle of the face in the photo, and then match the new 2D image with a fairly high degree of accuracy. The most difficult faces to match are those in low light. Merging photos from visible and infrared spectra can sharpen these images, but infrared cameras are still very expensive.

Of course, it is easier to match up posed images and the FBI has already partnered with issuers of state drivers’ licences for photo comparison. Link

The distance between the eyes has always been one of the key factors in facial recognition because it can’t easily be altered. Right now bionic eyes can only produce crude grayscale images, but eventually they may be used to fool biometric ID systems.
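A small sketch of why the inter-eye distance matters: dividing landmark distances by the eye span gives a scale-invariant signature, so the same face at different zoom levels still matches. The landmark coordinates below are invented for illustration:

```python
import math

# Normalizing facial landmark distances by the inter-eye span makes the
# signature invariant to how far the camera was from the face.

def signature(landmarks):
    """Ratios of each landmark's distance from the left eye to the eye span."""
    le, re = landmarks["left_eye"], landmarks["right_eye"]
    eye_span = math.dist(le, re)
    return tuple(round(math.dist(le, p) / eye_span, 4)
                 for name, p in sorted(landmarks.items())
                 if name != "left_eye")

face = {"left_eye": (0, 0), "right_eye": (6, 0),
        "nose": (3, -4), "mouth": (3, -7)}
# Same face, photographed twice as close (all coordinates doubled):
zoomed = {k: (2 * x, 2 * y) for k, (x, y) in face.items()}

print(signature(face) == signature(zoomed))  # True: scale cancels out
```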

Memories are often grouped into two categories: declarative memory, the short and long-term storage of facts like names, places and events; and implicit memory, the type of memory used to learn a skill like playing the piano.

In their study, the researchers sought to better understand the mechanisms underlying short-term declarative memories such as remembering a phone number or email address someone has just shared.

Using isolated pieces of rodent brain tissue, the researchers demonstrated that they could form a memory of which one of four input pathways was activated. The neural circuits contained within small isolated sections of the brain region called the hippocampus maintained the memory of stimulated input for more than 10 seconds. The information about which pathway was stimulated was evident by the changes in the ongoing activity of brain cells. Link

There is a large gap from an insect’s brain to a rat’s, then from rat to chimp, and finally to human. This gap will take a long time to close. But once we do simulate the human brain and its processes, implantation of memories will likely become much easier as well. In this case we don’t even need to simulate the human brain itself, just one of its functions.

A team of security researchers from Oxford, UC Berkeley, and the University of Geneva say that they were able to deduce digits of PIN numbers, birth months, areas of residence and other personal information by presenting 30 headset-wearing subjects with images of ATM machines, debit cards, maps, people, and random numbers in a series of experiments. The paper, titled “On the Feasibility of Side-Channel Attacks with Brain Computer Interfaces,” represents the first major attempt to uncover potential security risks in the use of the headsets.

“The correct answer was found by the first guess in 20% of the cases for the experiment with the PIN, the debit cards, people, and the ATM machine,” write the researchers. “The location was exactly guessed for 30% of users, month of birth for almost 60% and the bank based on the ATM machines for almost 30%.” Link

The lowest-hanging fruit for hackers has always been humans. Kevin Mitnick almost exclusively used social engineering, because manipulating social networks was much more effective than directly attacking hardened security protocols put in place by security professionals. Expect entirely new hardened systems to be created around protecting and filtering thoughts.

I fear a failure of imagination more than the atrocities that can come from this technology.