
Open Source Intelligence Analysis – Demographics

Getting good demographics can help you quickly understand the context of messages that circulate through different websites. The easiest method for large websites is to look them up on http://www.quantcast.com/

For instance, we see an obvious pattern: Hispanic and Black users tend to visit conspiracy websites much more often than White users. We also see that many of the sites tend to have older visitors with higher incomes. The exceptions are data-driven websites like Wikileaks, whose viewers tend to be lower-income but highly educated, and mostly White or Asian.


If you can find Facebook groups for websites like this, you can cross-check some of the basic information by looking at user photos, names, and ages (keep in mind that Facebook users tend to be younger than average).

To get a quick introduction to the character of a website, simply do an image search of it on Google, e.g.: site:http://forum.prisonplanet.com/index.php

Search through the websites looking for mentions of states and/or cities using Google, e.g.: texas site:infowars.com (don’t add a space between the search command and the website, so use site:websitehere.com). Look for introduction threads or user profiles that list locations. Twitter accounts can also assist in this process.
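
If you save a handful of those introduction threads or profile pages, a short script can tally the state mentions for you. This is a minimal sketch: the thread URLs are placeholders and the state list is truncated, so treat it as a starting point rather than a finished tool:

# Tally U.S. state mentions across a few saved forum pages. The URLs are
# placeholders -- swap in threads found via the site: searches above.
from collections import Counter
import re
import urllib.request

THREAD_URLS = [
    "http://example.com/introduction-thread-1.html",  # hypothetical
    "http://example.com/introduction-thread-2.html",  # hypothetical
]
STATES = ["Texas", "California", "Florida", "New York", "Ohio"]  # extend to all 50

counts = Counter()
for url in THREAD_URLS:
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    except OSError:
        continue  # skip pages that fail to load
    for state in STATES:
        # word boundaries keep short names from matching inside other words
        counts[state] += len(re.findall(r"\b%s\b" % re.escape(state), html, re.IGNORECASE))

print(counts.most_common())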

With this information you can cross-correlate the cities members live in to get an idea of the user base’s general make-up, and see how it compares with other demographic sources.
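
A minimal pandas sketch of that cross-referencing, assuming you have built two CSV files by hand (one listing members and their stated locations, one with city demographics pulled from a source like city-data.com); the file and column names are invented for illustration:

# Join member locations against a city demographics table to see what kind
# of places the user base comes from. Both CSVs are hypothetical hand-built
# exports; adjust the column names to whatever you actually collected.
import pandas as pd

members = pd.read_csv("members.csv")             # columns: username, city, state
cities = pd.read_csv("city_demographics.csv")    # columns: city, state, median_age, median_income

merged = members.merge(cities, on=["city", "state"], how="left")
summary = merged.groupby("state").agg(
    members=("username", "count"),
    median_age=("median_age", "median"),
    median_income=("median_income", "median"),
)
print(summary.sort_values("members", ascending=False).head(10))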

If there are a lot of unique images on the website, use Google’s reverse image search to look for other websites carrying the same images; finding similar sites and images will expand your understanding of the users’ psychographics.

For more Google search ideas, look at “How to solve impossible problems: Daniel Russell’s awesome Google search techniques”:

http://www.johntedesco.net/blog/2012/06/21/how-to-solve-impossible-problems-daniel-russells-awesome-google-search-techniques/

If you want to map out keywords and connections, use a graph similar to this:

http://www.touchgraph.com/seo

A basic search gives us a graph of related sites, which shows that we can also harvest data from YouTube and Amazon, as well as from the smaller linked websites.
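
You can rough out the same kind of link map without TouchGraph by pulling a page and counting its outbound domains. A small standard-library sketch (the seed URL is just an example, and a real crawl would follow more than one page):

# Collect outbound domains from a seed page as a crude link map, similar
# in spirit to the TouchGraph visualization described above.
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse
import urllib.request

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.domains = Counter()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.startswith("http"):
                    self.domains[urlparse(value).netloc] += 1

seed = "http://www.infowars.com/"  # example seed site
html = urllib.request.urlopen(seed, timeout=10).read().decode("utf-8", "ignore")
collector = LinkCollector()
collector.feed(html)
print(collector.domains.most_common(20))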

Now that we have the basic demographics, we look for commonalities. Search through abstracts of psychology journals using pubmed.gov or Google Scholar, looking for keywords related to conspiracy theories and the demographics you identified.
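
PubMed can also be queried programmatically through the public NCBI E-utilities interface, which saves a lot of clicking when you are trawling abstracts for keyword combinations. A small sketch (the query string is just one example of the kind of search described above):

# Search PubMed for abstracts matching conspiracy-belief keywords via the
# public NCBI E-utilities API (esearch); light use needs no API key.
import json
import urllib.parse
import urllib.request

query = '"conspiracy beliefs" AND (survey OR demographic)'
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": query,
                                 "retmode": "json", "retmax": "20"}))

with urllib.request.urlopen(url, timeout=10) as resp:
    result = json.load(resp)["esearchresult"]

print("total hits:", result["count"])
print("PMIDs:", ", ".join(result["idlist"]))  # paste into pubmed.gov to read the abstracts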

We end up with some curious things like this:

http://heb.sagepub.com/content/32/4/474.short

This article examines the endorsement of conspiracy beliefs about birth control (e.g., the belief that birth control is a form of Black genocide) and their association with contraceptive attitudes and behavior among African Americans. The authors conducted a telephone survey with a random sample of 500 African Americans (aged 15-44). Many respondents endorsed birth control conspiracy beliefs, including conspiracy beliefs about Black genocide and the safety of contraceptive methods. Stronger conspiracy beliefs predicted more negative attitudes toward contraceptives. In addition, men with stronger contraceptive safety conspiracy beliefs were less likely to be currently using any birth control. Among current birth control users, women with stronger contraceptive safety conspiracy beliefs were less likely to be using contraceptive methods that must be obtained from a health care provider. Results suggest that conspiracy beliefs are a barrier to pregnancy prevention. Findings point to the need for addressing conspiracy beliefs in public health practice.

And:

http://onlinelibrary.wiley.com/doi/10.1111/0162-895X.00160/abstract

This study used canonical correlation to examine the relationship of 11 individual difference variables to two measures of beliefs in conspiracies. Undergraduates were administered a questionnaire that included these two measures (beliefs in specific conspiracies and attitudes toward the existence of conspiracies) and scales assessing the 11 variables. High levels of anomie, authoritarianism, and powerlessness, along with a low level of self-esteem, were related to beliefs in specific conspiracies, whereas high levels of external locus of control and hostility, along with a low level of trust, were related to attitudes toward the existence of conspiracies in general. These findings support the idea that beliefs in conspiracies are related to feelings of alienation, powerlessness, hostility, and being disadvantaged. There was no support for the idea that people believe in conspiracies because they provide simplified explanations of complex events.

And:

http://www.jstor.org/discover/10.2307/3791630?uid=2&uid=4&sid=21101379127527

From this information we can break the user base into traditional psychographic segments using stock models.

Now you can create a database that can be used for advanced analytic operations, using R, Excel, SAS, or a programming language like Python. R tends to be more effective for smaller sets (less than 2 GB) because of its memory usage, but it has nearly every statistical function anyone has thought to use, which makes it very useful for experimental projects. SAS is commercial software that is mainly effective for large data sets. Excel is a decent entry-level solution. Python is not quite as flexible as R yet, but its modules are improving and it can be interfaced with R.
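
If you go the Python route, a small SQLite database is enough to get started and can later be read from R or exported to Excel. A minimal sketch; the table layout and sample rows are invented:

# Load hand-collected profile data into a small SQLite database so it can
# be queried later, pulled into R, or exported to Excel. The schema and
# sample rows are just one possible starting point.
import sqlite3

conn = sqlite3.connect("osint.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS members (
        username TEXT,
        site     TEXT,
        city     TEXT,
        state    TEXT,
        age      INTEGER
    )
""")
rows = [
    ("example_user1", "forum.prisonplanet.com", "Dallas", "TX", 45),
    ("example_user2", "forum.prisonplanet.com", "Phoenix", "AZ", 52),
]
conn.executemany("INSERT INTO members VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()

for state, n in conn.execute(
        "SELECT state, COUNT(*) FROM members GROUP BY state ORDER BY COUNT(*) DESC"):
    print(state, n)
conn.close()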


Open Source Intelligence Analysis – We NSA Now

Working Thoughts:

1. Wikileaks can act as a secondary database. What we’ve seen so far makes it clear that most of the classified material is common knowledge, but it could be useful.
2. Robert Steele is right that the humanitarian goodwill approach is superior. We’ve spent a lot of money in Afghanistan, but most of it was spent in unpopulated areas that were safe; the people who needed it didn’t get it, and there was a lot of corruption. A tighter approach could be taken.
3. Fiverr and penpal sites can also be useful for general cultural understanding or simple local tasks, e.g.: http://fiverr.com/worryfustion/help-you-learn-about-the-ethnic-groups-in-vietnam

http://fiverr.com/vann97/answer-10-questions-in-great-details-about-vietnam
4. Nearly all current prediction markets operate as zero-sum or negative-sum markets.


More OSINT Links:

“Dradis is a self-contained web application that provides a centralised repository of information to keep track of what has been done so far, and what is still ahead.”

http://dradisframework.org/

Links for OSINT (Open Source Intelligence) by Randolph Hock
http://www.onstrat.com/osint/

City Data:
http://www.city-data.com/

Public Records:
http://publicrecords.onlinesearches.com/

Name/Location Search Engine:
https://pipl.com/

“creepy is an application that allows you to gather geolocation related information about users from social networking platforms and image hosting services. The information is presented in a map inside the application where all the retrieved data is shown accompanied with relevant information (i.e. what was posted from that specific location) to provide context to the presentation.”
http://ilektrojohn.github.com/creepy/

Here is a recent example that uses the Palantir platform and OSINT:

Less than four months ago, the Southern portion of Sudan seceded and formed South Sudan, only the 5th country to be created this century. In this session, we will demonstrate how Palantir can draw from a plethora of Open Source Intelligence (OSINT) data sources (including academic research, blogs, news media, NGO reports and United Nations studies) to rapidly construct an understanding of the conflict underlying this somewhat anomalous 21st Century event. Using a suite of Palantir Helpers developed for OSINT analysis, the video performs relational, temporal, statistical, geospatial, and social network analysis of over a dozen open sources of data.

See also:

Detecting Emergent Conflicts through Web Mining and Visualization

https://www.recordedfuture.com/assets/Detecting-Emergent-Conflicts-through-Web-Mining-and-Visualization.pdf

&

Maltego

http://www.paterva.com/web6/


Open Source Intelligence Analysis – Palantir Does Indeed Kick Ass

Messing around with the Palantir Government suite right now. You can get an account and try it out here:

https://analyzethe.us/

You have the ability to import/export data, filter access, set up collaborative teams, and access the open archives of the US government and some nonprofits. There are two tiers of users, novice and power users:

Workspace operations and restrictions for novice users:

Importing data: Novice users can only import data that is correctly mapped to the deployment ontology; power users are exempt from this restriction. The maximum number of rows in structured data sources that a novice user can import at one time is set by the NOVICE_IMPORT_STRUCTURED_MAX_ROWS system property (default 1000). The maximum size of unstructured data sources that a novice user can import at one time is set by the NOVICE_IMPORT_UNSTRUCTURED_MAX_SIZE_IN_MB system property (default 5 megabytes).

Tagging text: The maximum number of tags that a novice user can create using the Find and Tag helper is set by the NOVICE_FIND_AND_TAG_MAX_TAGS system property (default 50). Novice users cannot access the Tag All Occurrences in Tab option in the Browser’s Tag As dialog.

SearchAround search templates: Novice users cannot import SearchAround templates from XML files, cannot publish SearchAround templates for use by the entire deployment, and cannot edit published templates. All other SearchAround features remain available.

Resolving Nexus Peering data conflicts: The Pending Changes application is available only in the Palantir Enterprise Platform and is accessible only to Workspace users who belong to the Nexus Peering Data Managers user group. Those users use the Pending Changes application to check for, analyze, and resolve data conflicts that are not automatically resolved when a local nexus is synchronized with a peered nexus.

Deleting objects: Novice users cannot delete published objects, and cannot delete objects created or changed by other users.

Resolving objects: The maximum number of objects that novice users can resolve together at one time is set by the NOVICE_RESOLVE_MAX_OBJECTS system property. This restriction does not apply to objects resolved using existing object resolution suites in the Object Resolution Wizard or during data import. Novice users may use the Object Resolution Wizard only with existing object resolution suites; they cannot perform manual object resolution, and cannot record new resolution criteria as an Object Resolution Suite. To learn more, see Resolving and Unresolving Objects in Workspace: Beyond the Basics.

Map application restrictions: All map metadata tools in the Layers helper are restricted. Novice users cannot access features that allow sorting of layers by metadata, coloring by metadata, or the creation of new metadata. All other Layers helper functions remain available.

In case you didn’t get what I just said: you have access to the same tools the FBI and CIA use, with some minor limitations and no access to classified documents. If you have access to Wolfram Alpha/Mathematica and can google for history on your topic of interest, then most of the classified files will become redundant.

What about data mining on a budget?

Consider relying on one or more GPUs. A CPU is designed to be a multitasker that can quickly switch between actions, whereas a graphics processing unit (GPU) is designed to do the same calculations repetitively, which gives large increases in performance on that kind of workload. The stacks in the listed papers, while giving exponentially higher speeds, did not use modern designs or graphics cards, which hindered them from running even faster.

http://www.azintablog.com/2010/10/16/gpu-large-scale-data-mining/

The GPU (Graphics Processing Unit) is changing the face of large scale data mining by significantly speeding up the processing of data mining algorithms. For example, using the K-Means clustering algorithm, the GPU-accelerated version was found to be 200x-400x faster than the popular benchmark program MineBench running on a single core CPU, and 6x-12x faster than a highly optimised CPU-only version running on an 8 core CPU workstation.

These GPU-accelerated performance results also hold for large data sets. For example, on a 2009 data set with 1 billion 2-dimensional data points and 1,000 clusters, the GPU-accelerated K-Means algorithm took 26 minutes (using a GTX 280 GPU with 240 cores) whilst the CPU-only version running on a single-core CPU workstation, using MineBench, took close to 6 days (see the research paper “Clustering Billions of Data Points using GPUs” by Ren Wu and Bin Zhang, HP Laboratories). Substantial additional speed-ups would be expected were the tests conducted today on the latest Fermi GPUs with 480 cores and 1 TFLOPS of performance.

Over the last two years hundreds of research papers have been published, all confirming the substantial improvement in data mining that the GPU delivers. I will identify a further 7 data mining algorithms where substantial GPU acceleration has been achieved, in the hope that it will stimulate your interest in using GPUs to accelerate your data mining projects:

Hidden Markov Models (HMM) have many data mining applications such as financial economics, computational biology, addressing the challenges of financial time series modelling (non-stationarity and non-linearity), analysing network intrusion logs, etc. Using parallel HMM algorithms designed for the GPU, researchers (see “cuHMM: a CUDA Implementation of Hidden Markov Model Training and Classification” by Chaun Lin, May 2009) were able to achieve a performance speedup of up to 800x on a GPU compared with the time taken on a single-core CPU workstation.

Sorting is a very important part of many data mining applications. Last month Duane Merrill and Andrew Grimshaw (from the University of Virginia) reported a very fast implementation of the radix sorting method that was able to exceed 1G keys/sec average sort rate on the GTX480 (NVIDIA Fermi GPU). See http://goo.gl/wpra

Density-based Clustering is an important paradigm in clustering since it is typically robust to noise and outliers and very good at searching for clusters of arbitrary shape in metric and vector spaces. Tests have shown that the GPU speed-up ranged from 3.5x for 30k points to almost 15x for 2 million data points. A guaranteed GPU speedup factor of at least 10x was obtained on data sets consisting of more than 250k points (see “Density-based Clustering using Graphics Processors” by Christian Bohm et al.).

Similarity Join is an important building block for similarity search and data mining algorithms. Researchers used a special algorithm called index-supported similarity join on the GPU to outperform the CPU by a factor of 15.9x on 180 Mbytes of data (see “Index-supported Similarity Join on Graphics Processors” by Christian Bohm et al.).

Bayesian Mixture Models have applications in many areas, and of particular interest is the Bayesian analysis of structured massive multivariate mixtures with large data sets. Recent research work (see “Understanding GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures” by Marc Suchard et al.) has demonstrated that an older-generation GPU (GeForce GTX285 with 240 cores) was able to achieve a 120x speed-up over a quad-core CPU version.

Support Vector Machines (SVMs) have many diverse data mining uses, including classification and regression analysis. Training SVMs and using them for classification remains computationally intensive. The GPU version of an SVM algorithm was found to be 43x-104x faster than the CPU version for building classification models and 112x-212x faster for building regression models. See “GPU Accelerated Support Vector Machines for Mining High-Throughput Screening Data” by Quan Liao, Jibo Wang, et al.

Kernel Machines. Algorithms based on kernel methods play a central part in data mining, including modern machine learning and non-parametric statistics. Central to these algorithms are a number of linear operations on matrices of kernel functions which take as arguments the training and testing data. Recent work (see “GPUML: Graphical processors for speeding up kernel machines” by Balaji Srinivasan et al., 2009) involves transforming these kernel machines into parallel kernel algorithms on a GPU. The following are two examples where considerable speed-ups were achieved: (1) to estimate the densities of 10,000 data points on 10,000 samples, the CPU implementation took 16 seconds whilst the GPU implementation took 13 ms, a speed-up well in excess of 1,230x; (2) in a Gaussian process regression on 8-dimensional data, the GPU took 2 seconds to make predictions whilst the CPU version took hours to make the same predictions, again a significant speed-up over the CPU version.

If you want to use GPUs but do not want to get your hands “dirty” writing CUDA C/C++ code (or other language bindings such as Python, Java, .NET, Fortran, Perl, or Lua), then consider using the MATLAB Parallel Computing Toolbox. This is a powerful solution for those who know MATLAB. Alternatively, R now has GPU plugins. A subsequent post will cover using MATLAB and R for GPU-accelerated data mining.
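
To see why K-Means in particular maps so well onto a GPU, here is a minimal NumPy version of the inner loop on random toy data. The expensive step, the point-to-centroid distance matrix, is a single array expression, which is exactly the kind of uniform, repetitive arithmetic a GPU is built for; CuPy mirrors much of the NumPy API, so code in this shape can often be moved onto a GPU by swapping the import. This is a sketch for illustration, not a tuned implementation:

# Minimal K-Means inner loop in NumPy on toy data. The heavy step (the
# pairwise distance matrix) is one array expression, which is why the same
# code parallelizes so well on a GPU (e.g. via CuPy's NumPy-like API).
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((100_000, 2))      # toy data set
k = 10
centroids = points[rng.choice(len(points), k, replace=False)]

for _ in range(20):                    # fixed iteration count for brevity
    # squared distance from every point to every centroid: shape (n, k)
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    centroids = np.stack([points[labels == j].mean(axis=0) for j in range(k)])

print(centroids)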

These are space whales flying through the sun:


Open Source Intelligence Analysis – Software, Methods, Resources

http://www.kurzweilai.net/intelligence-agencies-turn-to-crowdsourcing

Research firm Applied Research Associates has just launched a website, Global Crowd Intelligence, that invites the public to sign up and try their hand at intelligence forecasting, BBC Future reports.

The website is part of an effort called Aggregative Contingent Estimation, sponsored by the Intelligence Advanced Research Projects Activity (Iarpa), to understand the potential benefits of crowdsourcing for predicting future events by making forecasting more like a game of spy versus spy.

The new website rewards players who successfully forecast future events by giving them privileged access to certain “missions,” and also allowing them to collect reputation points, which can then be used for online bragging rights. When contributors enter the new site, they start off as junior analysts, but eventually progress to higher levels, allowing them to work on privileged missions.

The idea of crowdsourcing geopolitical forecasting is increasing in popularity, and not just for spies.  Wikistrat, a private company touted as “the world’s first massively multiplayer online consultancy,” was founded in 2002, and is using crowdsourcing to generate scenarios about future geopolitical events. It recently released a report based on a crowdsourced simulation looking at China’s future naval powers.

Warnaar says that Wikistrat’s approach appears to rely on developing “what-if scenarios,” rather than attaching a probability to a specific event happening, which is the goal of the Iarpa project.

Paul Fernhout put together a good open letter a while back on the need for this; it seems IARPA has put some effort forward for this purpose:

Paul Fernhout: Open Letter to the Intelligence Advanced Programs Research Agency (IARPA)

A first step towards that could be for IARPA to support better free software tools for “crowdsourced” public intelligence work involving using a social semantic desktop for sensemaking about open source data and building related open public action plans from that data to make local communities healthier, happier, more intrinsically secure, and also more mutually secure. Secure, healthy, prosperous, and happy local (and virtual) communities then can form together a secure, healthy, prosperous, and happy nation and planet in a non-ironic way. Details on that idea are publicly posted by me here in the form of a Proposal Abstract to the IARPA Incisive Analysis solicitation: “Social Semantic Desktop for Sensemaking on Threats and Opportunities”

So what kind of tools can an amateur use for making sense of data?

Data Mining and ACH

Here is a basic implementation of ACH:

http://competinghypotheses.org/

Analysis of Competing Hypotheses (ACH) is a simple model for how to think about a complex problem when the available information is incomplete or ambiguous, as typically happens in intelligence analysis. The software downloadable here takes an analyst through a process for making a well-reasoned, analytical judgment. It is particularly useful for issues that require careful weighing of alternative explanations of what has happened, is happening, or is likely to happen in the future. It helps the analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult. ACH is grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It helps analysts protect themselves from avoidable error, and improves their chances of making a correct judgment.
http://www2.parc.com/istl/projects/ach/ach.html
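
At its core ACH is just a matrix of evidence against hypotheses scored for consistency, with the ranking driven by disconfirming evidence rather than supporting evidence. A toy sketch of that bookkeeping (the hypotheses, evidence, and scores are all invented):

# Toy Analysis of Competing Hypotheses matrix. Each piece of evidence is
# scored against each hypothesis (-2 strongly inconsistent ... +2 strongly
# consistent); hypotheses are ranked by how much evidence contradicts them.
hypotheses = ["H1: grassroots movement", "H2: astroturf campaign", "H3: mixed"]
evidence = {
    "Accounts created within one week":   [-1,  2,  1],
    "Diverse writing styles":             [ 2, -2,  1],
    "Identical talking points repeated":  [-2,  2,  0],
}

inconsistency = [0.0] * len(hypotheses)
for scores in evidence.values():
    for i, s in enumerate(scores):
        if s < 0:          # only inconsistent evidence counts against a hypothesis
            inconsistency[i] += -s

for h, score in sorted(zip(hypotheses, inconsistency), key=lambda x: x[1]):
    print(f"{h}: inconsistency {score}")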

RapidMiner – About 6% of data miners use it – Can use R as an extension with a GUI
http://rapid-i.com/content/view/281/225/

R – 46% of data miners use this – in some ways better than commercial software – I’m not sure what the limit of this software is, incredibly powerful
http://www.r-project.org/

Network Mapping

Multiple tools – Finding sets of key players in a network – Cultural domain analysis – Network visualization – Software for analyzing ego-network data – Software package for visualizing social networks
http://www.analytictech.com/products.htm

NodeXL is a free, open-source template for Microsoft® Excel® 2007 and 2010 that makes it easy to explore network graphs. With NodeXL, you can enter a network edge list in a worksheet, click a button and see your graph, all in the familiar environment of the Excel window.
http://nodexl.codeplex.com/

Stanford Network Analysis Platform (SNAP) is a general purpose, high performance system for analysis and manipulation of large networks. Graphs consist of nodes and directed/undirected/multiple edges between the graph nodes. Networks are graphs with data on nodes and/or edges of the network.
http://snap.stanford.edu/snap/index.html

*ORA is a dynamic meta-network assessment and analysis tool developed by CASOS at Carnegie Mellon. It contains hundreds of social network, dynamic network metrics, trail metrics, procedures for grouping nodes, identifying local patterns, comparing and contrasting networks, groups, and individuals from a dynamic meta-network perspective. *ORA has been used to examine how networks change through space and time, contains procedures for moving back and forth between trail data (e.g. who was where when) and network data (who is connected to whom, who is connected to where …), and has a variety of geo-spatial network metrics, and change detection techniques. *ORA can handle multi-mode, multi-plex, multi-level networks. It can identify key players, groups and vulnerabilities, model network changes over time, and perform COA analysis. It has been tested with large networks (10^6 nodes per 5 entity classes). Distance based, algorithmic, and statistical procedures for comparing and contrasting networks are part of this toolkit.
http://www.casos.cs.cmu.edu/projects/ora/

NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
http://networkx.lanl.gov/
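
A quick sketch of the “finding key players” idea with NetworkX; the edge list stands in for whatever relationships you have scraped (forum replies, retweets, shared links):

# Rank likely "key players" in a collected network with NetworkX.
# The edge list is a placeholder for scraped relationships.
import networkx as nx

edges = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("carol", "dave"), ("dave", "erin"), ("erin", "frank"),
]
G = nx.Graph(edges)

betweenness = nx.betweenness_centrality(G)   # who bridges otherwise separate groups
degree = dict(G.degree())                    # who has the most direct ties

for node in sorted(G.nodes, key=lambda n: betweenness[n], reverse=True):
    print(f"{node}: degree={degree[node]}, betweenness={betweenness[node]:.2f}")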

Social Networks Visualizer (SocNetV) is a flexible and user-friendly tool for the analysis and visualization of Social Networks. It lets you construct networks (mathematical graphs) with a few clicks on a virtual canvas or load networks of various formats (GraphViz, GraphML, Adjacency, Pajek, UCINET, etc) and modify them to suit your needs. SocNetV also offers a built-in web crawler, allowing you to automatically create networks from all links found in a given initial URL.
http://socnetv.sourceforge.net/

SUBDUE is a graph-based knowledge discovery system that finds structural, relational patterns in data representing entities and relationships. SUBDUE represents data using a labeled, directed graph in which entities are represented by labeled vertices or subgraphs, and relationships are represented by labeled edges between the entities. SUBDUE uses the minimum description length (MDL) principle to identify patterns that minimize the number of bits needed to describe the input graph after being compressed by the pattern. SUBDUE can perform several learning tasks, including unsupervised learning, supervised learning, clustering and graph grammar learning. SUBDUE has been successfully applied in a number of areas, including bioinformatics, web structure mining, counter-terrorism, social network analysis, aviation and geology.
http://ailab.wsu.edu/subdue/

A range of tools for social network analysis, including node and graph-level indices, structural distance and covariance methods, structural equivalence detection, p* modeling, random graph generation, and 2D/3D network visualization.(R based)
http://cran.us.r-project.org/web/packag … index.html

statnet is a suite of software packages for network analysis that implement recent advances in the statistical modeling of networks. The analytic framework is based on Exponential family Random Graph Models (ergm). statnet provides a comprehensive framework for ergm-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. (Requires R)
http://statnetproject.org/

Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
http://tulip.labri.fr/TulipDrupal/

GraphChi is a spin-off of the GraphLab project ( http://www.graphlab.org ) from Carnegie Mellon University. It is based on research by Aapo Kyrola (http://www.cs.cmu.edu/~akyrola/) and his advisors.

GraphChi can run very large graph computations on just a single machine, by using a novel algorithm for processing the graph from disk (SSD or hard drive). Programs for GraphChi are written in the vertex-centric model, proposed by GraphLab and Google’s Pregel. GraphChi runs vertex-centric programs asynchronously (i.e. changes written to edges are immediately visible to subsequent computation), and in parallel. GraphChi also supports streaming graph updates and removal of edges from the graph. Section ‘Performance’ contains some examples of applications implemented for GraphChi and their running times on GraphChi.

The promise of GraphChi is to make web-scale graph computation, such as analysis of social networks, available to anyone with a modern laptop. It saves you from the hassle and costs of working with a distributed cluster or cloud services. We find it much easier to debug applications on a single computer than trying to understand how a distributed algorithm is executed.

In some cases GraphChi can solve bigger problems in reasonable time than many other available distributed frameworks. GraphChi also runs efficiently on servers with plenty of memory, and can use multiple disks in parallel by striping the data.
https://code.google.com/p/graphchi/
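
The vertex-centric model that GraphChi and Pregel use is easy to caricature in plain Python: each vertex repeatedly recomputes its own value from its neighbours’ values. The sketch below runs a PageRank-style update on a toy graph; the real systems do the same thing out of core and in parallel:

# Plain-Python caricature of the vertex-centric model: each vertex updates
# its value from its in-neighbours every "superstep". The update here is a
# PageRank step on a toy directed graph.
out_edges = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
vertices = list(out_edges)
in_edges = {v: [u for u, targets in out_edges.items() if v in targets] for v in vertices}

rank = {v: 1.0 / len(vertices) for v in vertices}
damping = 0.85

for _ in range(30):                    # synchronous supersteps
    new_rank = {}
    for v in vertices:                 # the per-vertex update function
        incoming = sum(rank[u] / len(out_edges[u]) for u in in_edges[v])
        new_rank[v] = (1 - damping) / len(vertices) + damping * incoming
    rank = new_rank

print(rank)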

Web Based Stuff:

Play amateur Gestapo from the comfort of your living room:
http://littlesis.org/
http://theyrule.net/

Search professionals by name, company or title – painfully verbose compared to the above 2 tools
http://www.marketvisual.com/

Broad list of search engines

http://en.wikipedia.org/wiki/List_of_search_engines

&

http://www.wired.com/business/2009/06/coolsearchengines/

A tool that uses Palantir Government:
https://analyzethe.us

connected with the following datasets:
http://www.usaspending.gov
http://www.data.gov/
http://www.opensecrets.org/
https://www.epls.gov/
and some misc. others

Database Listings

http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=8&Itemid=18

http://www.datawrangling.com/some-datasets-available-on-the-web

http://datamarket.com/

Analytic Methods:

THIS BLOG IS PART OF CLASS PROJECT TO EXPLORE VARIOUS ANALYTIC TECHNIQUES USED BY MODERN INTELLIGENCE ANALYSTS (DELICIOUS ALL CAPS)
http://advat.blogspot.co.uk/

Morphological Analysis – A general method for non-quantified modeling
http://www.swemorph.com/pdf/gma.pdf

Modeling Complex Socio-Technical Systems using Morphological Analysis
http://www.swemorph.com/pdf/it-webart.pdf
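
Mechanically, general morphological analysis boils down to enumerating every combination of the parameter values and striking out configurations that contain a pair judged inconsistent (the cross-consistency assessment). A tiny sketch with invented parameters and exclusions:

# Tiny morphological analysis sketch: enumerate all configurations of the
# problem parameters, then drop any configuration containing an excluded
# pair from the cross-consistency assessment. Everything here is invented.
from itertools import product

parameters = {
    "actor":   ["state", "non-state group", "lone individual"],
    "funding": ["self-funded", "donations", "foreign backing"],
    "medium":  ["forums", "social media", "print"],
}
excluded_pairs = [  # value pairs judged mutually inconsistent
    frozenset(["lone individual", "foreign backing"]),
]

names = list(parameters)
consistent = []
for combo in product(*(parameters[n] for n in names)):
    values = set(combo)
    if any(pair <= values for pair in excluded_pairs):
        continue
    consistent.append(dict(zip(names, combo)))

total = 1
for options in parameters.values():
    total *= len(options)
print(f"{len(consistent)} consistent configurations out of {total}")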

CIA Tradecraft Manual

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf

Top 5 Intelligence Analysis Methods: Analysis Of Competing Hypotheses
http://sourcesandmethods.blogspot.com/2008/12/top-5-intelligence-analysis-methods_19.html
(the author scores a 4.4 of 5 on http://www.ratemyprofessors.com/ShowRatings.jsp?tid=545372 , 2.4 on the easiness scale)

http://en.wikipedia.org/wiki/Intelligence_analysis#Analytic_tradecraft
Many new analysts find that getting started is the hardest part of their job. Stating the objective, from the consumer’s standpoint, is an excellent starting point. If the analyst cannot define the consumer and his needs, how is it possible to provide analysis that complements what the consumer already knows?

“Ambassador Robert D. Blackwill … seized the attention of the class of some 30 [intelligence community managers] by asserting that as a policy official he never read … analytic papers. Why? “Because they were nonadhesive.” As Blackwill explained, they were written by people who did not know what he was trying to do and, so, could not help him get it done:
“When I was working at State on European affairs, for example, on certain issues I was the Secretary of State. DI analysts did not know that–that I was one of a handful of key decision makers on some very important matters….”

More charitably, he now characterizes his early periods of service at the NSC Staff and in State Department bureaus as ones of “mutual ignorance”:

“DI analysts did not have the foggiest notion of what I did; and I did not have a clue as to what they could or should do.”[6]
Blackwill explained how he used his time efficiently, which rarely involved reading general CIA reports. “I read a lot. Much of it was press. You have to know how issues are coming across politically to get your job done. Also, cables from overseas for preparing agendas for meetings and sending and receiving messages from my counterparts in foreign governments. Countless versions of policy drafts from those competing for the President’s blessing. And dozens of phone calls. Many are a waste of time but have to be answered, again, for policy and political reasons.

“One more minute, please, on what I did not find useful. This is important. My job description called for me to help prepare the President for making policy decisions, including at meetings with foreign counterparts and other officials…. Do you think that after I have spent long weeks shaping the agenda, I have to be told a day or two before the German foreign minister visits Washington why he is coming?”


Find Out Your State’s Corruption Ranking (US)

http://www.iwatchnews.org/2012/03/19/8423/grading-nation-how-accountable-your-state


US Army Open Source Intelligence Link Directory

http://humanterrainsystem.army.mil/Newsletter/20101220_osint_link_directory.pdf


LittleSis – Snooping on the people in power

Lists, links and known associates – The people’s Gestapo

http://littlesis.org/


Research Beyond Google – Resources

http://oedb.org/library/college-basics/research-beyond-google