FORWARD BASE B

"Pay my troops no mind; they're just on a fact-finding mission."


What Gave Away Bin Laden’s Location


As you would expect, Osama bin Laden kept messages to friends and family reasonably secure. However, transmissions between his bodyguards and their families were not handled with the same level of care. What was unusual was that he stayed in such a high-profile house: anyone with the slightest bit of curiosity would wonder about the purpose of a compound with 12-foot concrete walls topped with barbed wire. That he had the cooperation of local military and intelligence elites is to be expected; rebels have a very difficult time operating unless they stack the deck in their favor by allying with neighboring forces. Their lack of technological sophistication is also pretty standard: many documents have been captured unencrypted from insurgents who don’t understand that properly implemented encryption is very difficult, if not impossible, to break.

http://www.csmonitor.com/World/Asia-South-Central/2011/0502/Bin-Laden-bodyguard-s-satellite-phone-calls-helped-lead-US-forces-to-hiding-place

Satellite phone calls that Osama bin Laden’s bodyguard made from July to August last year are believed to have helped US forces hunt down the Al Qaeda leader in the Pakistani compound where he was killed early Monday, according to local Pakistani intelligence sources.

US intelligence agencies tracked the Kuwaiti bodyguard’s calls from the compound to Al Qaeda associates in the cities of Kohat and Charsada in Khyber Pakhtunkhwa Province, a narrative that was corroborated by several sources.

From Wikipedia:

http://en.wikipedia.org/wiki/Location_of_Osama_bin_Laden#Tracking

American intelligence officials discovered the whereabouts of Osama bin Laden by tracking one of his couriers. Information was collected from Guantánamo Bay detainees, who gave intelligence officers the courier’s pseudonym and said that he was a protégé of Khalid Sheikh Mohammed.[5] In 2007, U.S. officials discovered the courier’s real name and, in 2009, that he lived in Abbottābad, Pakistan.[6] Using satellite photos and intelligence reports, the CIA surmised the inhabitants of the mansion. In September, the CIA concluded that the compound was “custom built to hide someone of significance” and that bin Laden’s residence there was very likely.[7][8] Officials surmised that he was living there with his youngest wife.[8]

Built in 2005, the three-story[12] mansion was located in a compound about 4 km (2.5 mi.) northeast of the center of Abbottabad.[7] While the compound was assessed by US officials at a value of USD 1 million, local real-estate agents assess the property value at USD 250 thousand.[13] On a lot about eight times the size of nearby houses, it was surrounded by 12- to 18-foot (3.7-5.5 m)[8] concrete walls topped with barbed wire.[7] There were two security gates and the third-floor balcony had a seven-foot-high (2.1 m) privacy wall.[12] There was no Internet or telephone service coming into the compound. Its residents burned their trash, unlike their neighbors, who simply set it out for collection. The compound is located at 34°10′09″N 73°14′33″E, 1.3 km (0.8 mi.) southwest of the closest point of the sprawling Pakistan Military Academy.[14] President Obama met with his national security advisors on March 14, 2011, in the first of five security meetings over six weeks. On April 29, at 8:20 a.m., Obama convened with Thomas Donilon, John O. Brennan, and other security advisers in the Diplomatic Room, where he authorized a raid of the Abbottābad compound. The government of Pakistan was not informed of this decision.[7]


US intelligence services contacted a Pakistani physician through the NGO Save the Children to help them set up a fake vaccination program that would allow them to collect DNA and identify the people inside the compound. This led to him being arrested and sentenced to 33 years for treason, supposedly for links to a local tribal terrorist organization:

To identify the occupants of the compound, the CIA worked with doctor Shakil Afridi to organize a fake vaccination program. Nurses gained entry to the residence to vaccinate the children and extract DNA,[9] which could be compared to a sample from his sister, who died in Boston in 2010.[10] It’s not clear if the DNA was ever obtained.[11]

http://en.wikipedia.org/wiki/Shakil_Afridi

Colleagues at Jamrud Hospital in Pakistan’s northwestern Khyber tribal area were suspicious of the absences of Dr. Shakil Afridi, the hospital’s chief surgeon, which he explained as “business” to attend to in Abbottabad. Dr. Afridi was accused of having taken a half-dozen World Health Organization cooler boxes without authorization. The containers are used for inoculation campaigns, but no immunization drives were underway in Abbottabad or the Khyber agency.[11][12]

Pakistani investigators said in a July 2012 report that Afridi met 25 times with “foreign secret agents, received instructions and provided sensitive information to them.”[13] According to Pakistani reports, Afridi told investigators that the charity Save the Children helped facilitate his meeting with U.S. intelligence agents although the charity denies the charge. The report alleges that Save the Children’s Pakistan director introduced Afridi to a western woman in Islamabad and that Afridi and the woman met regularly afterwards.

Open Source Intelligence Analysis – We NSA Now

Working Thoughts:

1. Wikileaks can act as a secondary database. What we’ve seen so far makes it clear that most of the classified material is common knowledge, but it could still be useful.
2. Robert Steele is right that the humanitarian goodwill approach is superior. We’ve spent a lot of money in Afghanistan, but most of it went to safe, sparsely populated areas; the people who needed it didn’t get it, and much was lost to corruption. A tighter approach is possible.
3. Fiverr and penpal sites can also be useful for general cultural understanding or simple local tasks, e.g.: http://fiverr.com/worryfustion/help-you-learn-about-the-ethnic-groups-in-vietnam

http://fiverr.com/vann97/answer-10-questions-in-great-details-about-vietnam
4. Nearly all current prediction markets operate as zero-sum or negative-sum markets.
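
On point 4, here is a quick arithmetic sketch (with made-up numbers) of why a fee-charging prediction market is negative-sum for the players as a group:

```python
# Illustrative only: a two-outcome prediction market where the house takes a fee.
# The stakes and fee rate below are invented; the point is that any fee pushes
# the players' combined return below zero.

stake_yes = 60.0   # total staked on "YES"
stake_no = 40.0    # total staked on "NO"
fee_rate = 0.05    # 5% house fee on the pot

pot = stake_yes + stake_no          # 100.0 paid in by players
payout_pool = pot * (1 - fee_rate)  # 95.0 paid back out to the winning side

# Whichever side wins, players as a group receive payout_pool but paid in pot:
players_net = payout_pool - pot     # -5.0 -> negative-sum
print(f"Players' combined net: {players_net:+.2f}")
# With fee_rate = 0 the combined net is exactly 0: zero-sum.
```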


More OSINT Links:

“Dradis is a self-contained web application that provides a centralised repository of information to keep track of what has been done so far, and what is still ahead.”

http://dradisframework.org/

Links for OSINT (Open Source Intelligence) by Randolph Hock
http://www.onstrat.com/osint/

City Data:
http://www.city-data.com/

Public Records:
http://publicrecords.onlinesearches.com/

Name/Location Search Engine:
https://pipl.com/

“creepy is an application that allows you to gather geolocation related information about users from social networking platforms and image hosting services. The information is presented in a map inside the application where all the retrieved data is shown accompanied with relevant information (i.e. what was posted from that specific location) to provide context to the presentation.”
http://ilektrojohn.github.com/creepy/

Here is a recent example that uses the Palantir platform and OSINT:

Less than four months ago, the Southern portion of Sudan seceded and formed South Sudan, only the 5th country to be created this century. In this session, we will demonstrate how Palantir can draw from a plethora of Open Source Intelligence (OSINT) data sources (including academic research, blogs, news media, NGO reports and United Nations studies) to rapidly construct an understanding of the conflict underlying this somewhat anomalous 21st Century event. Using a suite of Palantir Helpers developed for OSINT analysis, the video performs relational, temporal, statistical, geospatial, and social network analysis of over a dozen open sources of data.

See also:

Detecting Emergent Conflicts through Web Mining and Visualization

https://www.recordedfuture.com/assets/Detecting-Emergent-Conflicts-through-Web-Mining-and-Visualization.pdf

&

Maltego

http://www.paterva.com/web6/

Open Source Intelligence Analysis – Palantir Does Indeed Kick Ass

Messing around with the Palantir Government suite right now. You can get an account and try it yourself here:

https://analyzethe.us/

You have the ability to import/export data, filter access, set up collaborative teams, and access the open archives of the US government and some nonprofits. There are two tiers of users, novice users and power users:

Workspace Operations – Restrictions for Novice Users

Importing data

Novice users can only import data that is correctly mapped to the deployment ontology. Power users are exempt from this restriction.

The maximum number of rows in structured data sources that a Novice user can import at one time is restricted by the NOVICE_IMPORT_STRUCTURED_MAX_ROWS system property. The default value for this property is 1000.

The maximum size of unstructured data sources that can be imported by a Novice user at one time is restricted by the NOVICE_IMPORT_UNSTRUCTURED_MAX_SIZE_IN_MB system property. The default value for this property is 5 megabytes.
Tagging text

The maximum number of tags that a Novice user can create using the Find and Tag helper is restricted by the system property NOVICE_FIND_AND_TAG_MAX_TAGS. The default setting for this property is 50.

Novice users cannot access the Tag All Occurrences in Tab option in the Browser’s Tag As dialog.
SearchAround search templates

Novice users cannot import SearchAround Templates from XML files.

Novice users cannot publish SearchAround templates for use by the entire deployment, and cannot edit published templates.
All other SearchAround features remain available.
Resolving Nexus Peering data conflicts
The Pending Changes application is available only in the Palantir Enterprise Platform, and is only accessible to Workspace users who belong to the Nexus Peering Data Managers user group.
Nexus Peering Data Managers use the Pending Changes application to check for, analyze, and resolve data conflicts that are not automatically resolved when a local nexus is synchronized with a peered nexus.
Deleting objects

Novice users cannot delete published objects.

Novice users cannot delete objects created or changed by other users.
Resolving objects

The maximum number of objects that Novice users can resolve together at one time is restricted by the NOVICE_RESOLVE_MAX_OBJECTS system property. This restriction does not apply to objects resolved by using existing object resolution suites in the Object Resolution Wizard or during data import.

Novice users may use the Object Resolution Wizard only when using existing object resolution suites. Novice users cannot perform Manual Object Resolution, and cannot record new resolution criteria as an Object Resolution Suite.
To learn more, see Resolving and Unresolving Objects in Workspace: Beyond the Basics.
Map application restrictions
All map metadata tools in the Layers helper are restricted.
Novice users cannot access features that allow sorting of layers by metadata, coloring by metadata, or the creation of new metadata. All other Layer helper functions remain available.

In case you didn’t get what I just said: you have access to the same tools the FBI and CIA use, with some minor limitations and no access to classified documents. If you have access to Wolfram Alpha/Mathematica and can google for background on your topic of interest, then most of the classified files become redundant.

What about data mining on a budget?

Consider relying on one or more GPUs. A CPU is designed to be a multitasker that can quickly switch between actions, whereas a Graphics Processing Unit (GPU) is designed to perform the same calculation repeatedly over large amounts of data, which yields large increases in throughput. The setups in the papers below, while already delivering order-of-magnitude speedups, did not use modern designs or current graphics cards, so they could run even faster today.

http://www.azintablog.com/2010/10/16/gpu-large-scale-data-mining/

The GPU (Graphics Processing Unit) is changing the face of large scale data mining by significantly speeding up the processing of data mining algorithms. For example, using the K-Means clustering algorithm, the GPU-accelerated version was found to be 200x-400x faster than the popular benchmark program MineBench running on a single core CPU, and 6x-12x faster than a highly optimised CPU-only version running on an 8 core CPU workstation.

These GPU-accelerated performance results also hold for large data sets. For example, on a 2009 data set with 1 billion 2-dimensional data points and 1,000 clusters, the GPU-accelerated K-Means algorithm took 26 minutes (using a GTX 280 GPU with 240 cores) whilst the CPU-only version running on a single-core CPU workstation, using MineBench, took close to 6 days (see research paper “Clustering Billions of Data Points using GPUs” by Ren Wu and Bin Zhang, HP Laboratories). Substantial additional speed-ups are expected were the tests conducted today on the latest Fermi GPUs with 480 cores and 1 TFLOPS performance.
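
To make the quoted K-Means numbers a bit more concrete, here is a minimal sketch (mine, not the HP Labs code) of a single K-Means iteration in NumPy. The pairwise point-to-centroid distance step is the embarrassingly parallel work a GPU chews through; one low-effort way to try the same math on a GPU is to swap NumPy for the CuPy library, assuming you have a CUDA card and CuPy installed.

```python
import numpy as np
# import cupy as np  # assumption: CuPy installed; it mirrors much of NumPy's API on the GPU

def kmeans_step(points, centroids):
    """One Lloyd iteration: assign points to the nearest centroid, then recompute centroids."""
    # Pairwise squared distances, shape (n_points, k) -- the data-parallel hot spot.
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    new_centroids = np.vstack([
        points[labels == k].mean(axis=0) if np.any(labels == k) else centroids[k]
        for k in range(centroids.shape[0])
    ])
    return labels, new_centroids

rng = np.random.default_rng(0)
pts = rng.random((10_000, 2))   # toy stand-in for the billion-point set in the paper
cents = pts[rng.choice(len(pts), 8, replace=False)]
for _ in range(10):
    labels, cents = kmeans_step(pts, cents)
```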

Over the last two years hundreds of research papers have been published, all confirming the substantial improvement in data mining that the GPU delivers. I will identify a further 7 data mining algorithms where substantial GPU acceleration has been achieved, in the hope that it will stimulate your interest in using GPUs to accelerate your data mining projects:

Hidden Markov Models (HMM) have many data mining applications such as financial economics, computational biology, addressing the challenges of financial time series modelling (non-stationary and non-linearity), analysing network intrusion logs, etc. Using parallel HMM algorithms designed for the GPU, researchers (see cuHMM: a CUDA Implementation of Hidden Markov Model Training and Classification by Chaun Lin, May 2009) were able to achieve performance speedup of up to 800x on a GPU compared with the time taken on a single-core CPU workstation.

Sorting is a very important part of many data mining applications. Last month Duane Merrill and Andrew Grimshaw (from the University of Virginia) reported a very fast implementation of the radix sorting method and were able to exceed 1G keys/sec average sort rate on the GTX480 (NVidia Fermi GPU). See http://goo.gl/wpra

Density-based Clustering is an important paradigm in clustering since typically it is noise and outlier robust and very good at searching for clusters of arbitrary shape in metric and vector spaces. Tests have shown that the GPU speed-up ranged from 3.5x for 30k points to almost 15x for 2 million data points. A guaranteed GPU speedup factor of at least 10x was obtained on data sets consisting of more than 250k points. (See “Density-based Clustering using Graphics Processors” by Christian Bohm et al).
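
If you just want to play with density-based clustering before touching a GPU, scikit-learn’s DBSCAN does the CPU-side version in a couple of lines (assuming scikit-learn is installed; the quoted speedups are for a custom GPU implementation, not this one):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus scattered outliers -- the "arbitrary shape + noise" case
# that density-based clustering handles well.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(500, 2))
blob_b = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(500, 2))
noise = rng.uniform(low=-2.0, high=5.0, size=(50, 2))
X = np.vstack([blob_a, blob_b, noise])

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(X)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
print("points flagged as noise:", int((labels == -1).sum()))
```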

Similarity Join is an important building block for similarity search and data mining algorithms. Researchers used a special algorithm called Index-supported similarity join on the GPU to outperform the CPU by a factor of 15.9x on 180 Mbytes of data (See “Index-supported Similarity Join on Graphics Processors” by Christian Bohm et al).

Bayesian Mixture Models have applications in many areas, and of particular interest is the Bayesian analysis of structured massive multivariate mixtures with large data sets. Recent research work (see “Understanding GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures” by Marc Suchard et al.) has demonstrated that an old generation GPU (GeForce GTX285 with 240 cores) was able to achieve a 120x speed-up over a quad-core CPU version.

Support Vector Machines (SVM) have many diverse data mining uses including classification and regression analysis. Training SVMs and using them for classification remains computationally intensive. The GPU version of an SVM algorithm was found to be 43x-104x faster than the CPU version for building classification models and 112x-212x faster for building regression models. See “GPU Accelerated Support Vector Machines for Mining High-Throughput Screening Data” by Quan Liao, Jibo Wang, et al.

Kernel Machines. Algorithms based on kernel methods play a central part in data mining, including modern machine learning and non-parametric statistics. Central to these algorithms are a number of linear operations on matrices of kernel functions which take as arguments the training and testing data. Recent work (see “GPUML: Graphical processors for speeding up kernel machines” by Balaji Srinivasan et al., 2009) involves transforming these kernel machines into parallel kernel algorithms on a GPU. Two examples where considerable speed-ups were achieved: (1) estimating the densities of 10,000 data points on 10,000 samples, where the CPU implementation took 16 seconds whilst the GPU implementation took 13 ms, a speed-up well in excess of 1,230x; (2) a Gaussian process regression on 8-dimensional data, where the GPU took 2 seconds to make predictions whilst the CPU version took hours to make the same prediction, again a significant speed-up over the CPU version.

If you want to use GPUs but you do not want to get your hands “dirty” writing CUDA C/C++ code (or other language bindings such as Python, Java, .NET, Fortran, Perl, or Lua), then consider using the MATLAB Parallel Computing Toolbox. This is a powerful solution for those who know MATLAB. Alternatively, R now has GPU plugins. A subsequent post will cover using MATLAB and R for GPU-accelerated data mining.

These are space whales flying through the sun:

Open Source Intelligence Analysis – Software, Methods, Resources

http://www.kurzweilai.net/intelligence-agencies-turn-to-crowdsourcing

Research firm Applied Research Associates has just launched a website, Global Crowd Intelligence, that invites the public to sign up and try their hand at intelligence forecasting, BBC Future reports.

The website is part of an effort called Aggregative Contingent Estimation, sponsored by the Intelligence Advanced Research Projects Activity (Iarpa), to understand the potential benefits of crowdsourcing for predicting future events by making forecasting more like a game of spy versus spy.

The new website rewards players who successfully forecast future events by giving them privileged access to certain “missions,” and also allowing them to collect reputation points, which can then be used for online bragging rights. When contributors enter the new site, they start off as junior analysts, but eventually progress to higher levels, allowing them to work on privileged missions.

The idea of crowdsourcing geopolitical forecasting is increasing in popularity, and not just for spies.  Wikistrat, a private company touted as “the world’s first massively multiplayer online consultancy,” was founded in 2002, and is using crowdsourcing to generate scenarios about future geopolitical events. It recently released a report based on a crowdsourced simulation looking at China’s future naval powers.

Warnaar says that Wikistrat’s approach appears to rely on developing “what-if scenarios,” rather than attaching a probability to a specific event happening, which is the goal of the Iarpa project.

Paul Fernhout put together a good open letter a while back on the need for this; it seems IARPA has put some effort toward this purpose:

http://www.phibetaiota.net/2011/09/paul-fernhout-open-letter-to-the-intelligence-advanced-programs-research-agency-iarpa/

A first step towards that could be for IARPA to support better free software tools for “crowdsourced” public intelligence work involving using a social semantic desktop for sensemaking about open source data and building related open public action plans from that data to make local communities healthier, happier, more intrinsically secure, and also more mutually secure. Secure, healthy, prosperous, and happy local (and virtual) communities then can form together a secure, healthy, prosperous, and happy nation and planet in a non-ironic way. Details on that idea are publicly posted by me here in the form of a Proposal Abstract to the IARPA Incisive Analysis solicitation: “Social Semantic Desktop for Sensemaking on Threats and Opportunities”

So what kind of tools can an amateur use for making sense of data?

Data Mining and ACH

Here is a basic implementation of ACH:

http://competinghypotheses.org/

Analysis of Competing Hypotheses (ACH) is a simple model for how to think about a complex problem when the available information is incomplete or ambiguous, as typically happens in intelligence analysis. The software downloadable here takes an analyst through a process for making a well-reasoned, analytical judgment. It is particularly useful for issues that require careful weighing of alternative explanations of what has happened, is happening, or is likely to happen in the future. It helps the analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult. ACH is grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It helps analysts protect themselves from avoidable error, and improves their chances of making a correct judgment.
http://www2.parc.com/istl/projects/ach/ach.html
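
Under the hood, ACH is just a hypotheses-by-evidence matrix: each piece of evidence is marked consistent, inconsistent, or neutral for each hypothesis, and hypotheses are ranked by how much weighted evidence contradicts them, since the method tries to refute hypotheses rather than confirm a favorite. A minimal sketch of that bookkeeping (hypotheses and weights are invented; the evidence lines are borrowed loosely from the compound details quoted earlier):

```python
# Minimal ACH bookkeeping: score each hypothesis by its weighted inconsistencies.
# C = consistent, I = inconsistent, N = neutral / not applicable.

hypotheses = ["H1: high-value target lives in the compound",
              "H2: the compound belongs to an ordinary wealthy family"]

# (description, weight, {hypothesis: rating})
evidence = [
    ("No phone or internet service into the compound", 2,
     {hypotheses[0]: "C", hypotheses[1]: "I"}),
    ("Residents burn their trash instead of setting it out", 1,
     {hypotheses[0]: "C", hypotheses[1]: "I"}),
    ("High walls, barbed wire, two security gates", 1,
     {hypotheses[0]: "C", hypotheses[1]: "N"}),
]

def inconsistency_score(hypothesis):
    """Classic ACH: only inconsistencies count against a hypothesis."""
    return sum(w for _, w, ratings in evidence if ratings[hypothesis] == "I")

# The least-contradicted hypothesis survives; the most-contradicted is rejected first.
for h in sorted(hypotheses, key=inconsistency_score):
    print(f"{inconsistency_score(h):>3}  {h}")
```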

RapidMiner – About 6% of data miners use it – Can use R as an extension with a GUI
http://rapid-i.com/content/view/281/225/

R – 46% of data miners use this – in some ways better than commercial software – I’m not sure what the limits of this software are; incredibly powerful
http://www.r-project.org/

Network Mapping

Multiple tools – Finding sets of key players in a network – Cultural domain analysis – Network visualization – Software for analyzing ego-network data – Software package for visualizing social networks
http://www.analytictech.com/products.htm

NodeXL is a free, open-source template for Microsoft® Excel® 2007 and 2010 that makes it easy to explore network graphs. With NodeXL, you can enter a network edge list in a worksheet, click a button and see your graph, all in the familiar environment of the Excel window.
http://nodexl.codeplex.com/

Stanford Network Analysis Platform (SNAP) is a general purpose, high performance system for analysis and manipulation of large networks. Graphs consist of nodes and directed/undirected/multiple edges between the graph nodes. Networks are graphs with data on nodes and/or edges of the network.
http://snap.stanford.edu/snap/index.html

*ORA is a dynamic meta-network assessment and analysis tool developed by CASOS at Carnegie Mellon. It contains hundreds of social network, dynamic network metrics, trail metrics, procedures for grouping nodes, identifying local patterns, comparing and contrasting networks, groups, and individuals from a dynamic meta-network perspective. *ORA has been used to examine how networks change through space and time, contains procedures for moving back and forth between trail data (e.g. who was where when) and network data (who is connected to whom, who is connected to where …), and has a variety of geo-spatial network metrics, and change detection techniques. *ORA can handle multi-mode, multi-plex, multi-level networks. It can identify key players, groups and vulnerabilities, model network changes over time, and perform COA analysis. It has been tested with large networks (10^6 nodes per 5 entity classes). Distance-based, algorithmic, and statistical procedures for comparing and contrasting networks are part of this toolkit.
http://www.casos.cs.cmu.edu/projects/ora/

NetworkX is a Python language software package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.
http://networkx.lanl.gov/
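
Since several of the tools above are pitched at finding key players in a network, here is a tiny NetworkX sketch of the basic version of that: rank nodes on a made-up edge list by degree and betweenness centrality.

```python
import networkx as nx

# Made-up "who talks to whom" edge list.
edges = [
    ("courier", "bodyguard"), ("courier", "facilitator"),
    ("bodyguard", "family_A"), ("bodyguard", "family_B"),
    ("facilitator", "cell_leader"), ("facilitator", "money_man"),
    ("cell_leader", "recruit_1"), ("cell_leader", "recruit_2"),
]
G = nx.Graph(edges)

# Two quick "key player" measures: most ties, and who sits on the most
# shortest paths between others (a broker / cut-point measure).
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for name, score in sorted(betweenness.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{name:12s} betweenness={score:.2f} degree={degree[name]:.2f}")
```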

Social Networks Visualizer (SocNetV) is a flexible and user-friendly tool for the analysis and visualization of Social Networks. It lets you construct networks (mathematical graphs) with a few clicks on a virtual canvas or load networks of various formats (GraphViz, GraphML, Adjacency, Pajek, UCINET, etc) and modify them to suit your needs. SocNetV also offers a built-in web crawler, allowing you to automatically create networks from all links found in a given initial URL.
http://socnetv.sourceforge.net/

SUBDUE is a graph-based knowledge discovery system that finds structural, relational patterns in data representing entities and relationships. SUBDUE represents data using a labeled, directed graph in which entities are represented by labeled vertices or subgraphs, and relationships are represented by labeled edges between the entities. SUBDUE uses the minimum description length (MDL) principle to identify patterns that minimize the number of bits needed to describe the input graph after being compressed by the pattern. SUBDUE can perform several learning tasks, including unsupervised learning, supervised learning, clustering and graph grammar learning. SUBDUE has been successfully applied in a number of areas, including bioinformatics, web structure mining, counter-terrorism, social network analysis, aviation and geology.
http://ailab.wsu.edu/subdue/

A range of tools for social network analysis, including node and graph-level indices, structural distance and covariance methods, structural equivalence detection, p* modeling, random graph generation, and 2D/3D network visualization.(R based)
http://cran.us.r-project.org/web/packag … index.html

statnet is a suite of software packages for network analysis that implement recent advances in the statistical modeling of networks. The analytic framework is based on Exponential family Random Graph Models (ergm). statnet provides a comprehensive framework for ergm-based network modeling, including tools for model estimation, model evaluation, model-based network simulation, and network visualization. This broad functionality is powered by a central Markov chain Monte Carlo (MCMC) algorithm. (Requires R)
http://statnetproject.org/

Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Tulip aims to provide the developer with a complete library, supporting the design of interactive information visualization applications for relational data that can be tailored to the problems he or she is addressing.
http://tulip.labri.fr/TulipDrupal/

GraphChi is a spin-off of the GraphLab ( http://www.graphlab.org ) project from Carnegie Mellon University. It is based on research by Aapo Kyrola (http://www.cs.cmu.edu/~akyrola/) and his advisors.

GraphChi can run very large graph computations on just a single machine, by using a novel algorithm for processing the graph from disk (SSD or hard drive). Programs for GraphChi are written in the vertex-centric model, proposed by GraphLab and Google’s Pregel. GraphChi runs vertex-centric programs asynchronously (i.e. changes written to edges are immediately visible to subsequent computation), and in parallel. GraphChi also supports streaming graph updates and removal of edges from the graph. Section ‘Performance’ contains some examples of applications implemented for GraphChi and their running times on GraphChi.

The promise of GraphChi is to make web-scale graph computation, such as analysis of social networks, available to anyone with a modern laptop. It saves you from the hassle and costs of working with a distributed cluster or cloud services. We find it much easier to debug applications on a single computer than to try to understand how a distributed algorithm is executed.

In some cases GraphChi can solve bigger problems in reasonable time than many other available distributed frameworks. GraphChi also runs efficiently on servers with plenty of memory, and can use multiple disks in parallel by striping the data.
https://code.google.com/p/graphchi/
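
The vertex-centric model GraphChi refers to is easy to see in miniature: each vertex repeatedly recomputes its own value from its neighbors’ values. Below is a toy, plain-Python PageRank-style version of that idea; it is not GraphChi’s actual API, it updates synchronously rather than asynchronously, and it has none of the out-of-core machinery that is GraphChi’s real contribution.

```python
# Toy vertex-centric computation: PageRank-style updates on a tiny directed graph.
out_edges = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
vertices = list(out_edges)
in_edges = {v: [u for u, outs in out_edges.items() if v in outs] for v in vertices}

rank = {v: 1.0 / len(vertices) for v in vertices}
damping = 0.85

for _ in range(20):
    # "Vertex program": each vertex recomputes its value from its in-neighbors.
    rank = {
        v: (1 - damping) / len(vertices)
           + damping * sum(rank[u] / len(out_edges[u]) for u in in_edges[v])
        for v in vertices
    }

print({v: round(r, 3) for v, r in rank.items()})
```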

Web Based Stuff:

Play amateur Gestapo from the comfort of your living room:
http://littlesis.org/
http://theyrule.net/

Search professionals by name, company, or title; painfully verbose compared to the above two tools
http://www.marketvisual.com/

Broad list of search engines

http://en.wikipedia.org/wiki/List_of_search_engines

&

http://www.wired.com/business/2009/06/coolsearchengines/

A tool that uses Palantir Government:
https://analyzethe.us

connected with the following datasets:
http://www.usaspending.gov
http://www.data.gov/
http://www.opensecrets.org/
https://www.epls.gov/
and some misc. others

Database Listings

http://www.forecastingprinciples.com/index.php?option=com_content&view=article&id=8&Itemid=18

http://www.datawrangling.com/some-datasets-available-on-the-web

http://datamarket.com/

Analytic Methods:

THIS BLOG IS PART OF CLASS PROJECT TO EXPLORE VARIOUS ANALYTIC TECHNIQUES USED BY MODERN INTELLIGENCE ANALYSTS (DELICIOUS ALL CAPS)
http://advat.blogspot.co.uk/

Morphological Analysis – A general method for non-quantified modeling
http://www.swemorph.com/pdf/gma.pdf

Modeling Complex Socio-Technical Systems using Morphological Analysis
http://www.swemorph.com/pdf/it-webart.pdf

CIA Tradecraft Manual

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/Tradecraft%20Primer-apr09.pdf

Top 5 Intelligence Analysis Methods: Analysis Of Competing Hypotheses
http://sourcesandmethods.blogspot.com/2008/12/top-5-intelligence-analysis-methods_19.html
(the author scores a 4.4 of 5 on http://www.ratemyprofessors.com/ShowRatings.jsp?tid=545372 , 2.4 on the easiness scale)

http://en.wikipedia.org/wiki/Intelligence_analysis#Analytic_tradecraft
Many new analysts find that getting started is the hardest part of their job. Stating the objective, from the consumer’s standpoint, is an excellent starting point. If the analyst cannot define the consumer and his needs, how is it possible to provide analysis that complements what the consumer already knows?

“Ambassador Robert D. Blackwill … seized the attention of the class of some 30 [intelligence community managers] by asserting that as a policy official he never read … analytic papers. Why? “Because they were nonadhesive.” As Blackwill explained, they were written by people who did not know what he was trying to do and, so, could not help him get it done:
“When I was working at State on European affairs, for example, on certain issues I was the Secretary of State. DI analysts did not know that–that I was one of a handful of key decision makers on some very important matters….”

More charitably, he now characterizes his early periods of service at the NSC Staff and in State Department bureaus as ones of “mutual ignorance”

“DI analysts did not have the foggiest notion of what I did; and I did not have a clue as to what they could or should do.”[6]
Blackwill explained how he used his time efficiently, which rarely involved reading general CIA reports. “I read a lot. Much of it was press. You have to know how issues are coming across politically to get your job done. Also, cables from overseas for preparing agendas for meetings and sending and receiving messages from my counterparts in foreign governments. Countless versions of policy drafts from those competing for the President’s blessing. And dozens of phone calls. Many are a waste of time but have to be answered, again, for policy and political reasons.

“One more minute, please, on what I did not find useful. This is important. My job description called for me to help prepare the President for making policy decisions, including at meetings with foreign counterparts and other officials…. Do you think that after I have spent long weeks shaping the agenda, I have to be told a day or two before the German foreign minister visits Washington why he is coming?”
