"Pay my troops no mind; they're just on a fact-finding mission."

Tag Archives: sigint

Open Source Intelligence Analysis – Demographics

Getting good demographics can help you to quickly understand the context of messages that circulate through different websites. The easiest method for large websites is to look them up on

For instance, we see an obvious pattern: Hispanic and Black visitors tend to visit conspiracy websites much more often than white visitors. We also see that many of the sites tend to have older visitors with higher incomes. The exception is data-driven websites like Wikileaks, which tend to draw lower-income but highly educated viewers who are mostly white or Asian.

If you can find Facebook groups for websites like this, you can cross-check some of the basic information by looking at user photos, names, and ages (keep in mind that Facebook users tend to be younger than average):

To get a quick introduction to the character of a website, simply do a Google image search on it, e.g.: site:

Search through the websites looking for mentions of states and/or cities using Google, e.g.: texas (don’t add a space between the search operator and the website). Look for introduction threads or user profiles that list locations. Twitter accounts can also assist in this process.
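As a sketch, the state/city sweep above can be scripted by generating the `site:` queries in bulk. The domains here are placeholders, not real targets; the point is only the query syntax:

```python
# hypothetical target forums -- note there is no space after the site: operator
sites = ["example-forum.net", "example-board.org"]
places = ["texas", "houston", "florida"]

# one query per (site, place) pair, ready to paste into Google
queries = [f"site:{site} {place}" for site in sites for place in places]
```

Each query can then be run by hand or fed to a search API, and hits tallied per state or city.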

With this information you can cross-correlate the cities members live in to get an idea of their general make-up, and how it compares to other demographic sources.

If there are a lot of unique images on the website, use google’s image search function to look around for other websites with the same images, which will expand your understanding of the psychographics of the users by finding similar sites and images.

For more google search ideas, look at “How to solve impossible problems: Daniel Russell’s awesome Google search technique”:

If you want to map out keywords and connections, use a graph similar to this:

A basic search gives us something like this:

Which shows us that we can also harvest data from youtube and amazon, as well as the smaller linked websites.

Now that we have the basic demographics, we look for commonalities. Search through abstracts of psychology journals using Google Scholar, looking for keywords related to conspiracy theories and demographic information.

We end up with some curious things like this:

This article examines the endorsement of conspiracy beliefs about birth control (e.g., the belief that birth control is a form of Black genocide) and their association with contraceptive attitudes and behavior among African Americans. The authors conducted a telephone survey with a random sample of 500 African Americans (aged 15-44). Many respondents endorsed birth control conspiracy beliefs, including conspiracy beliefs about Black genocide and the safety of contraceptive methods. Stronger conspiracy beliefs predicted more negative attitudes toward contraceptives. In addition, men with stronger contraceptive safety conspiracy beliefs were less likely to be currently using any birth control. Among current birth control users, women with stronger contraceptive safety conspiracy beliefs were less likely to be using contraceptive methods that must be obtained from a health care provider. Results suggest that conspiracy beliefs are a barrier to pregnancy prevention. Findings point to the need for addressing conspiracy beliefs in public health practice.


This study used canonical correlation to examine the relationship of 11 individual difference variables to two measures of beliefs in conspiracies. Undergraduates were administered a questionnaire that included these two measures (beliefs in specific conspiracies and attitudes toward the existence of conspiracies) and scales assessing the 11 variables. High levels of anomie, authoritarianism, and powerlessness, along with a low level of self-esteem, were related to beliefs in specific conspiracies, whereas high levels of external locus of control and hostility, along with a low level of trust, were related to attitudes toward the existence of conspiracies in general. These findings support the idea that beliefs in conspiracies are related to feelings of alienation, powerlessness, hostility, and being disadvantaged. There was no support for the idea that people believe in conspiracies because they provide simplified explanations of complex events.


From this information we can break them into traditional psychographics using stock models:

Now you can create a database that can be used for advanced analytic operations, using R, Excel, SAS, or a programming language like Python. R tends to be more effective for smaller sets (less than 2GB) because of its memory usage, but it implements nearly every statistical function anyone has thought to use, which makes it very useful for experimental projects. SAS is commercial software that is mainly effective for large data sets. Excel is a decent entry-level solution. Python is not quite as flexible as R yet, but its modules are improving and it can be interfaced with R.
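For a minimal sketch of that database step in Python, the standard library's sqlite3 module is enough to hold scraped demographics and run grouped queries. The domains and figures below are made-up placeholders, not real measurements:

```python
import sqlite3

# made-up demographic rows: (domain, genre, median visitor age, median income)
rows = [
    ("site-a.example", "conspiracy", 45, 52000),
    ("site-b.example", "conspiracy", 51, 61000),
    ("site-c.example", "leaks", 33, 38000),
]

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE sites (domain TEXT, genre TEXT, median_age INT, median_income INT)"
)
con.executemany("INSERT INTO sites VALUES (?, ?, ?, ?)", rows)

# average visitor age per genre -- the kind of cross-tab you would
# otherwise do in R or Excel
avg_age = dict(
    con.execute("SELECT genre, AVG(median_age) FROM sites GROUP BY genre")
)
```

The same table can later be exported to CSV for R or Excel if a heavier tool is needed.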

Open Source Intelligence Analysis – We NSA Now

Working Thoughts:

1. Wikileaks can act as a secondary database. What we’ve seen so far makes it clear that most of the classified material is common knowledge but it could be useful.
2. Robert Steele is right that the humanitarian goodwill approach is superior. We’ve spent a lot of money in Afghanistan, but most of it was spent in unpopulated areas that were safe; the people who needed it didn’t get it. There is a lot of corruption. A tighter approach could be taken.
3. Fiverr and penpal sites can also be useful for general cultural understanding or simple local tasks, e.g. :
4. Nearly all current prediction markets operate as zero-sum or negative-sum markets.

More OSINT Links:

“Dradis is a self-contained web application that provides a centralised repository of information to keep track of what has been done so far, and what is still ahead.”

Links for OSINT (Open Source Intelligence) by Randolph Hock

City Data:

Public Records:

Name/Location Search Engine:

“creepy is an application that allows you to gather geolocation related information about users from social networking platforms and image hosting services. The information is presented in a map inside the application where all the retrieved data is shown accompanied with relevant information (i.e. what was posted from that specific location) to provide context to the presentation.”

Here is a recent example that uses the Palantir platform and OSINT:

Less than four months ago, the Southern portion of Sudan seceded and formed South Sudan, only the 5th country to be created this century. In this session, we will demonstrate how Palantir can draw from a plethora of Open Source Intelligence (OSINT) data sources (including academic research, blogs, news media, NGO reports and United Nations studies) to rapidly construct an understanding of the conflict underlying this somewhat anomalous 21st Century event. Using a suite of Palantir Helpers developed for OSINT analysis, the video performs relational, temporal, statistical, geospatial, and social network analysis of over a dozen open sources of data.

See also:

Detecting Emergent Conflicts through Web Mining and Visualization



Open Source Intelligence Analysis – Palantir Does Indeed Kick Ass

Messing around with the Palantir Government suite right now. You can get an account and mess around with it here:

You have the ability to import/export data, filter access, set up collaborative teams and access to the open archives of the US Gov and some non profits. There are two tiers of users, novice users and power users:

Workspace Operations
Restrictions for Novice Users
Importing data

Novice users can only import data that is correctly mapped to the deployment ontology. Power users are exempt from this restriction.

The maximum number of rows in structured data sources that a Novice user can import at one time is restricted by the NOVICE_IMPORT_STRUCTURED_MAX_ROWS system property. The default value for this property is 1000.

The maximum size of unstructured data sources that can be imported by a Novice user at one time is restricted by the NOVICE_IMPORT_UNSTRUCTURED_MAX_SIZE_IN_MB system property. The default value for this property is 5 megabytes.
Tagging text

The maximum number of tags that a Novice user can create using the Find and Tag helper is restricted by the system property NOVICE_FIND_AND_TAG_MAX_TAGS. The default setting for this property is 50.

Novice users cannot access the Tag All Occurrences in Tab option in the Browser’s Tag As dialog.
SearchAround search templates

Novice users cannot import SearchAround Templates from XML files.

Novice users cannot publish SearchAround templates for use by the entire deployment, and cannot edit published templates.
All other SearchAround features remain available.
Resolving Nexus Peering data conflicts
The Pending Changes application is available only in the Palantir Enterprise Platform, and is only accessible to Workspace users who belong to the Nexus Peering Data Managers user group.
Nexus Peering Data Managers use the Pending Changes application to check for, analyze, and resolve data conflicts that are not automatically resolved when a local nexus is synchronized with a peered nexus.
Deleting objects

Novice users cannot delete published objects.

Novice users cannot delete objects created or changed by other users.
Resolving objects

The maximum number of objects that Novice users can resolve together at one time is restricted by the NOVICE_RESOLVE_MAX_OBJECTS system property. This restriction does not apply to objects resolved by using existing object resolution suites in the Object Resolution Wizard or during data import.

Novice users may use the Object Resolution Wizard only when using existing object resolution suites. Novice users cannot perform Manual Object Resolution, and cannot record new resolution criteria as an Object Resolution Suite.
To learn more, see Resolving and Unresolving Objects in Workspace: Beyond the Basics.
Map application restrictions
All map metadata tools in the Layers helper are restricted.
Novice users cannot access features that allow sorting of layers by metadata, coloring by metadata, or the creation of new metadata. All other Layer helper functions remain available.

In case you didn’t get what I just said: you have access to the same tools the FBI and CIA use, with some minor limitations and no access to classified documents. If you have access to Wolfram Alpha/Mathematica and can google for the history of your topic of interest, then most of the classified files become redundant.

What about data mining on a budget?

Consider relying on a GPU (or several). A CPU is designed to be a multitasker that can quickly switch between actions, whereas a Graphics Processing Unit (GPU) is designed to do the same calculations repetitively, giving large increases in performance. The stacks in the papers listed below, while achieving exponentially higher speeds, did not use modern designs or graphics cards, which kept them from running even faster.

The GPU (Graphics Processing Unit) is changing the face of large-scale data mining by significantly speeding up the processing of data mining algorithms. For example, using the K-Means clustering algorithm, the GPU-accelerated version was found to be 200x-400x faster than the popular benchmark program MineBench running on a single-core CPU, and 6x-12x faster than a highly optimised CPU-only version running on an 8-core CPU workstation.

These GPU-accelerated performance results also hold for large data sets. For example, on a 2009 data set with 1 billion 2-dimensional data points and 1,000 clusters, the GPU-accelerated K-Means algorithm took 26 minutes (using a GTX 280 GPU with 240 cores), whilst the CPU-only version running on a single-core CPU workstation, using MineBench, took close to 6 days (see the research paper “Clustering Billions of Data Points using GPUs” by Ren Wu and Bin Zhang, HP Laboratories). Substantial additional speed-ups would be expected were the tests conducted today on the latest Fermi GPUs with 480 cores and 1 TFLOPS performance.
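To make the algorithm concrete, here is a minimal CPU-only K-Means (Lloyd's algorithm) sketch in NumPy; the point-to-centre distance matrix in the inner loop is exactly the embarrassingly parallel step the GPU papers accelerate:

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain CPU Lloyd's algorithm with farthest-first initialisation."""
    rng = np.random.default_rng(seed)
    # farthest-first seeding: avoids starting with near-duplicate centres
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        dist = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[dist.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # distance from every point to every centre -- the step that
        # maps well onto a GPU
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep old centre if a cluster empties
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

On a billion points that distance matrix is recomputed every iteration, which is why moving it to hundreds of GPU cores pays off so dramatically.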

Over the last two years hundreds of research papers have been published, all confirming the substantial improvement in data mining that the GPU delivers. I will identify a further 7 data mining algorithms where substantial GPU acceleration has been achieved, in the hope that this will stimulate your interest in using GPUs to accelerate your own data mining projects:

Hidden Markov Models (HMM) have many data mining applications such as financial economics, computational biology, addressing the challenges of financial time series modelling (non-stationarity and non-linearity), analysing network intrusion logs, etc. Using parallel HMM algorithms designed for the GPU, researchers (see “cuHMM: a CUDA Implementation of Hidden Markov Model Training and Classification” by Chuan Lin, May 2009) were able to achieve performance speedups of up to 800x on a GPU compared with the time taken on a single-core CPU workstation.
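The core computation being accelerated is compact: the forward algorithm, which scores an observation sequence against the model. A minimal NumPy sketch with a toy two-state model (the states, symbols, and probabilities are invented for illustration):

```python
import numpy as np

# toy 2-state HMM: states {calm, volatile}, observations {0: small move, 1: large move}
start = np.array([0.6, 0.4])          # initial state distribution
trans = np.array([[0.7, 0.3],         # state transition matrix
                  [0.4, 0.6]])
emit  = np.array([[0.9, 0.1],         # emission probabilities per state
                  [0.2, 0.8]])

def forward(obs):
    """Likelihood of an observation sequence under the HMM -- the
    matrix-vector recurrence that GPU implementations parallelise."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()
```

Each step is a matrix-vector product, so long sequences and many states turn into exactly the dense linear algebra GPUs are built for.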

Sorting is a very important part of many data mining applications. Last month Duane Merrill and Andrew Grimshaw (from the University of Virginia) reported a very fast implementation of the radix sort method that was able to exceed a 1G keys/sec average sort rate on the GTX480 (NVidia Fermi GPU). See
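Radix sort itself is simple; a serial least-significant-digit version in Python looks like this (Merrill and Grimshaw's contribution was a very fast GPU-parallel formulation of the same idea, not this sketch):

```python
def radix_sort(keys, key_bits=32, digit_bits=8):
    """LSD radix sort for non-negative integers."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, key_bits, digit_bits):
        buckets = [[] for _ in range(1 << digit_bits)]
        for key in keys:
            buckets[(key >> shift) & mask].append(key)
        # concatenating buckets in order is a stable pass on this digit
        keys = [key for bucket in buckets for key in bucket]
    return keys
```

Because each pass only inspects one digit, the passes decompose into independent counting and scatter steps, which is what makes the algorithm so amenable to GPUs.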

Density-based Clustering is an important paradigm in clustering since it is typically robust to noise and outliers and very good at finding clusters of arbitrary shape in metric and vector spaces. Tests have shown that the GPU speed-up ranged from 3.5x for 30k points to almost 15x for 2 million data points. A guaranteed GPU speedup factor of at least 10x was obtained on data sets consisting of more than 250k points (see “Density-based Clustering using Graphics Processors” by Christian Bohm et al).

Similarity Join is an important building block for similarity search and data mining algorithms. Researchers used a special algorithm called index-supported similarity join on the GPU to outperform the CPU by a factor of 15.9x on 180 MB of data (see “Index-supported Similarity Join on Graphics Processors” by Christian Bohm et al).

Bayesian Mixture Models have applications in many areas; of particular interest is the Bayesian analysis of structured massive multivariate mixtures with large data sets. Recent research work (see “Understanding the GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures” by Marc Suchard et al.) has demonstrated that an older-generation GPU (GeForce GTX 285 with 240 cores) was able to achieve a 120x speed-up over a quad-core CPU version.

Support Vector Machines (SVMs) have many diverse data mining uses, including classification and regression analysis. Training SVMs and using them for classification remains computationally intensive. The GPU version of an SVM algorithm was found to be 43x-104x faster than the CPU version for building classification models, and 112x-212x faster for building regression models. See “GPU Accelerated Support Vector Machines for Mining High-Throughput Screening Data” by Quan Liao, Jibo Wang, et al.

Kernel Machines. Algorithms based on kernel methods play a central part in data mining, including modern machine learning and non-parametric statistics. Central to these algorithms are a number of linear operations on matrices of kernel functions which take as arguments the training and testing data. Recent work (see “GPUML: Graphical processors for speeding up kernel machines” by Balaji Srinivasan et al., 2009) transforms these kernel machines into parallel kernel algorithms on a GPU. Two examples where considerable speed-ups were achieved: (1) estimating the densities of 10,000 data points on 10,000 samples, where the CPU implementation took 16 seconds whilst the GPU implementation took 13 ms, a speed-up well in excess of 1,230x; (2) Gaussian process regression on 8-dimensional data, where the GPU took 2 seconds to make predictions whilst the CPU version took hours to make the same predictions.
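The density-estimation example above reduces to one big matrix of kernel evaluations. A 1-D NumPy sketch of Gaussian kernel density estimation (the bandwidth here is an arbitrary choice, not from the paper):

```python
import numpy as np

def gaussian_kde(samples, query, bandwidth=0.5):
    """Estimate density at each query point as the mean of Gaussian
    kernels centred on the samples. The full query-by-sample kernel
    matrix is the object kernel-machine GPU ports parallelise."""
    d2 = (query[:, None] - samples[None, :]) ** 2
    k = np.exp(-d2 / (2 * bandwidth ** 2)) / (bandwidth * np.sqrt(2 * np.pi))
    return k.mean(axis=1)
```

With 10,000 queries against 10,000 samples that matrix has 10^8 independent entries, which is why the GPU speed-up is so large.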

If you want to use GPUs but do not want to get your hands “dirty” writing CUDA C/C++ code (or using other language bindings such as Python, Java, .NET, Fortran, Perl, or Lua), then consider using the MATLAB Parallel Computing Toolbox. This is a powerful solution for those who know MATLAB. Alternatively, R now has GPU plugins. A subsequent post will cover using MATLAB and R for GPU-accelerated data mining.

These are space whales flying through the sun:

Open Source Intelligence Analysis – Software, Methods, Resources

Research firm Applied Research Associates has just launched a website, Global Crowd Intelligence, that invites the public to sign up and try their hand at intelligence forecasting, BBC Future reports.

The website is part of an effort called Aggregative Contingent Estimation, sponsored by the Intelligence Advanced Research Projects Activity (Iarpa), to understand the potential benefits of crowdsourcing for predicting future events by making forecasting more like a game of spy versus spy.

The new website rewards players who successfully forecast future events by giving them privileged access to certain “missions,” and also allowing them to collect reputation points, which can then be used for online bragging rights. When contributors enter the new site, they start off as junior analysts, but eventually progress to higher levels, allowing them to work on privileged missions.

The idea of crowdsourcing geopolitical forecasting is increasing in popularity, and not just for spies.  Wikistrat, a private company touted as “the world’s first massively multiplayer online consultancy,” was founded in 2002, and is using crowdsourcing to generate scenarios about future geopolitical events. It recently released a report based on a crowdsourced simulation looking at China’s future naval powers.

Warnaar says that Wikistrat’s approach appears to rely on developing “what-if scenarios,” rather than attaching a probability to a specific event happening, which is the goal of the Iarpa project.

Paul Fernhout put together a good open letter awhile back on the need for this, it seems IARPA has put some effort forward for this purpose:

A first step towards that could be for IARPA to support better free software tools for “crowdsourced” public intelligence work involving using a social semantic desktop for sensemaking about open source data and building related open public action plans from that data to make local communities healthier, happier, more intrinsically secure, and also more mutually secure. Secure, healthy, prosperous, and happy local (and virtual) communities then can form together a secure, healthy, prosperous, and happy nation and planet in a non-ironic way. Details on that idea are publicly posted by me here in the form of a Proposal Abstract to the IARPA Incisive Analysis solicitation: “Social Semantic Desktop for Sensemaking on Threats and Opportunities”

So what kind of tools can an amateur use for making sense of data?

Data Mining and ACH

Here is a basic implementation of ACH:

Analysis of Competing Hypotheses (ACH) is a simple model for how to think about a complex problem when the available information is incomplete or ambiguous, as typically happens in intelligence analysis. The software downloadable here takes an analyst through a process for making a well-reasoned, analytical judgment. It is particularly useful for issues that require careful weighing of alternative explanations of what has happened, is happening, or is likely to happen in the future. It helps the analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult. ACH is grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It helps analysts protect themselves from avoidable error, and improves their chances of making a correct judgment.
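The mechanics of ACH fit in a few lines. In this hypothetical sketch each piece of evidence is scored against each hypothesis, and, following Heuer, the surviving hypothesis is the one with the least inconsistent evidence rather than the most consistent (all names and scores below are invented):

```python
hypotheses = ["H1: insider leak", "H2: external hack", "H3: accident"]
evidence = {
    # invented scores: +1/+2 consistent, 0 neutral, -1/-2 inconsistent
    "access logs wiped":       [+1, +1, -2],
    "malware on the server":   [-1, +2,  0],
    "no privilege escalation": [+1, -2,  0],
}

def rank(hypotheses, evidence):
    # ACH weighs only disconfirming evidence: sum each column's negatives
    inconsistency = [
        sum(min(scores[i], 0) for scores in evidence.values())
        for i in range(len(hypotheses))
    ]
    # least-negative total (least disconfirmed) ranks first
    order = sorted(zip(inconsistency, hypotheses), reverse=True)
    return [h for _, h in order]

ranking = rank(hypotheses, evidence)
```

Discarding the positive scores is deliberate: consistent evidence is cheap because many hypotheses can explain it, while inconsistent evidence actively eliminates.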

RapidMiner – About 6% of data miners use it – Can use R as an extension with a GUI

R – 46% of data miners use this – in some ways better than commercial software – I’m not sure what the limit of this software is, incredibly powerful

Network Mapping

Multiple tools – Finding sets of key players in a network – Cultural domain analysis – Network visualization – Software for analyzing ego-network data – Software package for visualizing social networks

NodeXL is a free, open-source template for Microsoft® Excel® 2007 and 2010 that makes it easy to explore network graphs. With NodeXL, you can enter a network edge list in a worksheet, click a button and see your graph, all in the familiar environment of the Excel window.

Stanford Network Analysis Platform (SNAP) is a general purpose, high performance system for analysis and manipulation of large networks. Graphs consist of nodes and directed/undirected/multiple edges between the graph nodes. Networks are graphs with data on nodes and/or edges of the network.

*ORA is a dynamic meta-network assessment and analysis tool developed by CASOS at Carnegie Mellon. It contains hundreds of social network and dynamic network metrics, trail metrics, and procedures for grouping nodes, identifying local patterns, and comparing and contrasting networks, groups, and individuals from a dynamic meta-network perspective. *ORA has been used to examine how networks change through space and time, contains procedures for moving back and forth between trail data (e.g. who was where when) and network data (who is connected to whom, who is connected to where …), and has a variety of geo-spatial network metrics and change detection techniques. *ORA can handle multi-mode, multi-plex, multi-level networks. It can identify key players, groups and vulnerabilities, model network changes over time, and perform COA analysis. It has been tested with large networks (10^6 nodes per 5 entity classes). Distance-based, algorithmic, and statistical procedures for comparing and contrasting networks are part of this toolkit.

NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
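A tiny example of the sort of key-player question these tools answer, using NetworkX with a made-up edge list (the names are placeholders):

```python
import networkx as nx

# hypothetical communication ties scraped from public profiles
g = nx.Graph()
g.add_edges_from([
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
])

# degree centrality: share of the network each node is directly tied to
centrality = nx.degree_centrality(g)
key_player = max(centrality, key=centrality.get)  # most-connected node
```

The same graph object feeds straight into NetworkX's community detection, shortest-path, and visualization routines as the network grows.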

Social Networks Visualizer (SocNetV) is a flexible and user-friendly tool for the analysis and visualization of Social Networks. It lets you construct networks (mathematical graphs) with a few clicks on a virtual canvas or load networks of various formats (GraphViz, GraphML, Adjacency, Pajek, UCINET, etc) and modify them to suit your needs. SocNetV also offers a built-in web crawler, allowing you to automatically create networks from all links found in a given initial URL.

SUBDUE is a graph-based knowledge discovery system that finds structural, relational patterns in data representing entities and relationships. SUBDUE represents data using a labeled, directed graph in which entities are represented by labeled vertices or subgraphs, and relationships are represented by labeled edges between the entities. SUBDUE uses the minimum description length (MDL) principle to identify patterns that minimize the number of bits needed to describe the input graph after being compressed by the pattern. SUBDUE can perform several learning tasks, including unsupervised learning, supervised learning, clustering and graph grammar learning. SUBDUE has been successfully applied in a number of areas, including bioinformatics, web structure mining, counter-terrorism, social network analysis, aviation and geology.

A range of tools for social network analysis, including node and graph-level indices, structural distance and covariance methods, structural equivalence detection, p* modeling, random graph generation, and 2D/3D network visualization.(R based) … index.html

statnet is a suite of software packages for network analysis that implement recent advances in the statistical modeling of networks. The analytic framework is based on Exponential family Random Graph Models (ergm). statnet provides a comprehensive framework for ergm-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. (Requires R)

Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.

GraphChi is a spin-off of the GraphLab ( ) -project from the Carnegie Mellon University. It is based on research by Aapo Kyrola ( and his advisors.

GraphChi can run very large graph computations on just a single machine, by using a novel algorithm for processing the graph from disk (SSD or hard drive). Programs for GraphChi are written in the vertex-centric model, proposed by GraphLab and Google’s Pregel. GraphChi runs vertex-centric programs asynchronously (i.e. changes written to edges are immediately visible to subsequent computation), and in parallel. GraphChi also supports streaming graph updates and removal of edges from the graph. Section ‘Performance’ contains some examples of applications implemented for GraphChi and their running times on GraphChi.

The promise of GraphChi is to bring web-scale graph computation, such as analysis of social networks, within reach of anyone with a modern laptop. It saves you from the hassle and costs of working with a distributed cluster or cloud services. We find it much easier to debug applications on a single computer than to try to understand how a distributed algorithm is executed.

In some cases GraphChi can solve bigger problems in reasonable time than many other available distributed frameworks. GraphChi also runs efficiently on servers with plenty of memory, and can use multiple disks in parallel by striping the data.

Web Based Stuff:

Play amateur Gestapo from the comfort of your living room:

Search Professionals by Name, Company or Title, painfully verbose compared to the above 2 tools

Broad list of search engines


A tool that uses Palantir Government:

connected with the following datasets:
and some misc. others

Database Listings

Analytic Methods:


Morphological Analysis – A general method for non-quantified modeling

Modeling Complex Socio-Technical Systems using Morphological Analysis

CIA Tradecraft Manual

Top 5 Intelligence Analysis Methods: Analysis Of Competing Hypotheses
(the author scores a 4.4 of 5 on , 2.4 on the easiness scale)
Many new analysts find that getting started is the hardest part of their job. Stating the objective, from the consumer’s standpoint, is an excellent starting point. If the analyst cannot define the consumer and his needs, how is it possible to provide analysis that complements what the consumer already knows?

“Ambassador Robert D. Blackwill … seized the attention of the class of some 30 [intelligence community managers] by asserting that as a policy official he never read … analytic papers. Why? “Because they were nonadhesive.” As Blackwill explained, they were written by people who did not know what he was trying to do and, so, could not help him get it done:
“When I was working at State on European affairs, for example, on certain issues I was the Secretary of State. DI analysts did not know that–that I was one of a handful of key decision makers on some very important matters….”

More charitably, he now characterizes his early periods of service at the NSC Staff and in State Department bureaus as ones of “mutual ignorance”

“DI analysts did not have the foggiest notion of what I did; and I did not have a clue as to what they could or should do.”[6]
Blackwill explained how he used his time efficiently, which rarely involved reading general CIA reports. “I read a lot. Much of it was press. You have to know how issues are coming across politically to get your job done. Also, cables from overseas for preparing agendas for meetings and sending and receiving messages from my counterparts in foreign governments. Countless versions of policy drafts from those competing for the President’s blessing. And dozens of phone calls. Many are a waste of time but have to be answered, again, for policy and political reasons.

“One more minute, please, on what I did not find useful. This is important. My job description called for me to help prepare the President for making policy decisions, including at meetings with foreign counterparts and other officials…. Do you think that after I have spent long weeks shaping the agenda, I have to be told a day or two before the German foreign minister visits Washington why he is coming?”

Interview With Jacob Appelbaum, Member of Tor and Wikileaks

If you’re wondering why they have a microscope embedded so deeply in his ass: he used to be a spokesperson for Wikileaks, and he’s also a member of the Cult of the Dead Cow. Hacktivists have a six-degrees-of-Kevin-Bacon connection to Wikileaks, so it’s likely that not all of the material they receive was purposefully leaked. After credit card companies and banks cut ties with Wikileaks, they were introduced to an extended DDoS attack. As he describes in the interview, looking at metadata and relationships between people, even when using open source information, has created reliable simulations of outcomes.

Some of it is as safe as we think it can be, and some of it is not safe at all. The number one rule of “signals intelligence” is to look for plain text, or signaling information—who is talking to whom. For instance, you and I have been emailing, and that information, that metadata, isn’t encrypted, even if the contents of our messages are. This “social graph” information is worth more than the content. So, if you use SSL-encryption to talk to the OWS server for example, great, they don’t know what you’re saying. Maybe. Let’s assume the crypto is perfect. They see that you’re in a discussion on the site, they see that Bob is in a discussion, and they see that Emma is in a discussion. So what happens? They see an archive of the website, maybe they see that there were messages posted, and they see that the timing of the messages correlates to the time you were all browsing there. They don’t need to break the crypto to know what was said and who said it.

Traffic analysis. It’s as if they are sitting outside your house, watching you come and go, as well as the house of every activist you deal with. Except they’re doing it electronically. They watch you, they take notes, they infer information by the metadata of your life, which implies what it is that you’re doing. They can use it to figure out a cell of people, or a group of people, or whatever they call it in their parlance where activists become terrorists. And it’s through identification that they move into specific targeting, which is why it’s so important to keep this information safe first.
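A crude illustration of how much the metadata alone gives away: correlate who is active at the same times, using nothing but (timestamp, user) pairs. The log below is invented, and the five-minute window is an arbitrary choice:

```python
from collections import Counter
from itertools import combinations

# invented metadata log: (minute, user) -- no message content at all
log = [(0, "you"), (1, "bob"), (1, "emma"), (30, "you"),
       (31, "emma"), (60, "bob"), (61, "you"), (61, "emma")]

def co_activity(log, window=5):
    """Count how often two users act within the same time window --
    the 'social graph' signal described above."""
    pairs = Counter()
    for (t1, u1), (t2, u2) in combinations(log, 2):
        if u1 != u2 and abs(t1 - t2) <= window:
            pairs[tuple(sorted((u1, u2)))] += 1
    return pairs

strongest_tie = co_activity(log).most_common(1)[0][0]
```

Even this toy version surfaces the strongest tie without reading a single message, which is the whole point of the passage above.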

For example, they see that we’re meeting. They know that I have really good operational security. I have no phone. I have no computer. It would be very hard to track me here unless they had me physically followed. But they can still get to me by way of you. They just have to own your phone, or steal your recorder on the way out. The key thing is that good operational security has to be integrated into all of our lives so that observation of what we’re doing is much harder. Of course it’s not perfect. They can still target us, for instance, by sending us an exploit in our email, or a link in a web browser that compromises each of our computers. But if they have to exploit us directly, that changes things a lot. For one, the NYPD is not going to be writing exploits. They might buy software to break into your computer, but if they make a mistake, we can catch them. But it’s impossible to catch them if they’re in a building somewhere reading our text messages as they flow by, as they go through the switching center, as they write them down. We want to raise the bar so much that they have to attack us directly, and then in theory the law protects us to some extent.

But iPhones, for instance, don’t have a removable battery; they power off via the power button. So if I wrote a backdoor for the iPhone, it would play an animation that looked just like a black screen. And then when you pressed the button to turn it back on it would pretend to boot. Just play two videos. Link
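The timing correlation described in the quote above can be sketched in a few lines. This is a toy illustration, not a real tool: the session windows and post timestamps are invented, and a real analyst would be working from network metadata and scraped archives rather than hard-coded dictionaries.

```python
from datetime import datetime

# Hypothetical metadata: windows in which each user's connection to the
# site was observed (all names and times are made up for illustration).
sessions = {
    "alice": [(datetime(2012, 5, 1, 14, 0), datetime(2012, 5, 1, 14, 30))],
    "bob":   [(datetime(2012, 5, 1, 16, 0), datetime(2012, 5, 1, 16, 45))],
}

# Timestamps of anonymous posts taken from the public archive.
posts = [datetime(2012, 5, 1, 14, 12), datetime(2012, 5, 1, 16, 20)]

def likely_authors(post_time, sessions):
    """Return users whose observed session window contains the post time."""
    return [user for user, windows in sessions.items()
            if any(start <= post_time <= end for start, end in windows)]

for p in posts:
    print(p, "->", likely_authors(p, sessions))
```

Even with perfect encryption of the post contents, the overlap of posting times and connection times is enough to attribute messages, which is exactly the point the interviewee is making.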

CIA Funded Method For Determining Political Instability

Taken from open source data.

The US Government-sponsored Political Instability Task Force presented many of its Phase V findings during a panel at the 2005 annual meeting of the American Political Science Association in Washington, DC, September 3, 2005. Copies of the three papers presented at the meeting are posted here in PDF format.

The PITF is funded by the Central Intelligence Agency. The PITF website is hosted by the Center for Global Policy at George Mason University and is provided as a public service. The views expressed herein are those of the Task Force and its individual members, and do not represent the views of the University or the US Government. Link

“…First is trade openness (the total value of imports plus exports divided by GDP). Countries with lower trade openness (at the 25th percentile in the global distribution) had roughly two to three times higher odds of near-term instability than countries with higher openness to trade (those at the 75th percentile). State-led discrimination reappears, but with a larger impact. The odds ratio between states with and without major economic or political discrimination ranges from three to forty across the three control sets. The large range suggests the presence of outliers in control set B2, but the variable remains statistically significant across all three control sets.
Colonial heritage makes a notable difference in stability, with countries that were not formerly French colonies having odds of instability roughly four to thirteen times greater than former French possessions.

This most likely reflects the fact that France has been far more involved than other former colonial powers in maintaining economic and political order in its prior domains, including supporting the West African Franc, providing generous support to post-colonial rulers, and even intervening militarily to maintain unpopular rulers and head off rebellions.

We tested this argument with a categorical version of a variable that counts a chief executive’s cumulative years in office and found that new leaders (less than five years in office) and “entrenched” leaders (those more than fourteen years in office) indeed faced higher odds of instability than their peers who had been in office from 5–14 years. The odds of near-term instability for short-term leaders were two to fifteen times higher, and those for entrenched leaders were six to twelve times higher.

Finally, we did find one effect of group composition on instability. Countries that had a dominant religious majority (over two-thirds of the population identified with the main religious group) were more likely to experience instability than countries in which the population was more evenly divided among different religious groups. Countries with a dominant religious majority faced relative odds of instability five to twelve times greater than those that were more evenly divided.

All of that said, regime type once again showed the strongest effects. With fewer cases and thus smaller samples, we did not find significant differences among all five regime types; instead, it was simply the case that full autocracies were most stable, partial democracies with factionalism were the most unstable, and all other regimes fell in the same middling range of instability. In particular, these other regimes had odds of instability that were six to nine times higher than those of full autocracies. Here, however, the impact of partial democracies with factionalism shoots right off the scale because in our data every African country that mixed partial democracy with factionalism suffered instability.”
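The PITF excerpt reports every effect as an odds ratio. For readers unfamiliar with the measure, here is a minimal sketch of how an odds ratio relates two groups' instability rates. The example rates are invented for illustration and are not taken from the paper.

```python
def odds(p):
    """Convert a probability to odds (p against 1 - p)."""
    return p / (1 - p)

def odds_ratio(p_group, p_reference):
    """Ratio of the two groups' odds of experiencing instability."""
    return odds(p_group) / odds(p_reference)

# Hypothetical illustration: if 20% of low-openness countries experienced
# instability versus 5% of high-openness countries, the odds ratio would be
# (0.20/0.80) / (0.05/0.95) = 4.75 -- i.e. "nearly five times higher odds."
print(odds_ratio(0.20, 0.05))
```

Note that an odds ratio is not the same as a ratio of probabilities; for rare events the two are close, but they diverge as the rates grow.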

Predictive Political Simulation Model – Senturion – Using Algorithms and Equations On Iraq, Palestine

Used to predict compromises and coalitions in political situations by modeling stakeholders. The report applies the model to Operation Iraqi Freedom, the Iraqi elections in January 2005, and the Palestinian leadership after Yasser Arafat’s death. All data fed into the simulation was open source.
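Stakeholder models of this family typically forecast a policy outcome from each actor's position, influence, and salience. The sketch below is a common textbook simplification (an influence-and-salience-weighted mean), not Senturion's actual proprietary algorithm; all actors and numbers are hypothetical.

```python
def predicted_position(stakeholders):
    """Forecast the policy outcome as the mean of stakeholder positions,
    weighted by influence * salience. This is an assumed simplification
    of expected-utility stakeholder models, not Senturion itself."""
    weights = [s["influence"] * s["salience"] for s in stakeholders]
    total = sum(weights)
    return sum(s["position"] * w
               for s, w in zip(stakeholders, weights)) / total

# Hypothetical stakeholders with positions on a 0-100 policy scale.
actors = [
    {"position": 20, "influence": 0.9, "salience": 0.8},  # powerful, engaged
    {"position": 70, "influence": 0.5, "salience": 0.9},  # weaker, very engaged
    {"position": 50, "influence": 0.3, "salience": 0.4},  # marginal
]
print(round(predicted_position(actors), 1))
```

The interesting behavior in real models comes from iterating this: actors observe the forecast outcome and shift positions or form coalitions, which is how such simulations predict compromises over time.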

h/t Justin Boland

US Expanding Intelligence Operations In Africa

Top Secret Government Files – One Of The Most Complex Information Management Problems Ever

“There has been so much growth since 9/11 that getting your arms around that – not just for the CIA, for the secretary of defense – is a challenge,” Defense Secretary Robert M. Gates said in an interview with The Post last week.

In the Department of Defense, where more than two-thirds of the intelligence programs reside, only a handful of senior officials – called Super Users – have the ability to even know about all the department’s activities. But as two of the Super Users indicated in interviews, there is simply no way they can keep up with the nation’s most sensitive work.

“I’m not going to live long enough to be briefed on everything” was how one Super User put it. The other recounted that for his initial briefing, he was escorted into a tiny, dark room, seated at a small table and told he couldn’t take notes. Program after program began flashing on a screen, he said, until he yelled “Stop!” in frustration.

“I wasn’t remembering any of it,” he said.

Underscoring the seriousness of these issues are the conclusions of retired Army Lt. Gen. John R. Vines, who was asked last year to review the method for tracking the Defense Department’s most sensitive programs. Vines, who once commanded 145,000 troops in Iraq and is familiar with complex problems, was stunned by what he discovered.

“I’m not aware of any agency with the authority, responsibility or a process in place to coordinate all these interagency and commercial activities,” he said in an interview. “The complexity of this system defies description.” Link h/t Justin Boland

US Army Open Source Intelligence Link Directory

LittleSis – Snooping on the people in power

Lists, links and known associates – The people’s Gestapo

Research Beyond Google – Resources

