Categories
Intelligence technology

Open Source Intelligence Analysis – We NSA Now

Working Thoughts:

1. Wikileaks can act as a secondary database. What we've seen so far makes it clear that most of the classified material is common knowledge, but it could still be useful.
2. Robert Steele is right that the humanitarian goodwill approach is superior. We've spent a lot of money in Afghanistan, but most of it went to safe, sparsely populated areas; the people who needed it didn't get it. There was a lot of corruption. A tighter approach could be taken.
3. Fiverr and penpal sites can also be useful for general cultural understanding or simple local tasks, e.g.:
http://fiverr.com/worryfustion/help-you-learn-about-the-ethnic-groups-in-vietnam
http://fiverr.com/vann97/answer-10-questions-in-great-details-about-vietnam
4. Nearly all current prediction markets operate as zero-sum or negative-sum markets.


More OSINT Links:

“Dradis is a self-contained web application that provides a centralised repository of information to keep track of what has been done so far, and what is still ahead.”

http://dradisframework.org/

Links for OSINT (Open Source Intelligence) by Randolph Hock
http://www.onstrat.com/osint/

City Data:
http://www.city-data.com/

Public Records:
http://publicrecords.onlinesearches.com/

Name/Location Search Engine:
https://pipl.com/

“creepy is an application that allows you to gather geolocation related information about users from social networking platforms and image hosting services. The information is presented in a map inside the application where all the retrieved data is shown accompanied with relevant information (i.e. what was posted from that specific location) to provide context to the presentation.”
http://ilektrojohn.github.com/creepy/

Here is a recent example that uses the Palantir platform and OSINT:

Less than four months ago, the Southern portion of Sudan seceded and formed South Sudan, only the 5th country to be created this century. In this session, we will demonstrate how Palantir can draw from a plethora of Open Source Intelligence (OSINT) data sources (including academic research, blogs, news media, NGO reports and United Nations studies) to rapidly construct an understanding of the conflict underlying this somewhat anomalous 21st Century event. Using a suite of Palantir Helpers developed for OSINT analysis, the video performs relational, temporal, statistical, geospatial, and social network analysis of over a dozen open sources of data.

See also:

Detecting Emergent Conflicts through Web Mining and Visualization

https://www.recordedfuture.com/assets/Detecting-Emergent-Conflicts-through-Web-Mining-and-Visualization.pdf

&

Maltego

http://www.paterva.com/web6/

Categories
Intelligence technology

Open Source Intelligence Analysis – Palantir Does Indeed Kick Ass

Messing around with the Palantir Government suite right now. You can get an account and try it out here:

https://analyzethe.us/

You can import/export data, filter access, set up collaborative teams, and access the open archives of the US Gov and some non-profits. There are two tiers of users, novice users and power users:

Workspace Operations
Restrictions for Novice Users
Importing data

Novice users can only import data that is correctly mapped to the deployment ontology. Power users are exempt from this restriction.

The maximum number of rows in structured data sources that a Novice user can import at one time is restricted by the NOVICE_IMPORT_STRUCTURED_MAX_ROWS system property. The default value for this property is 1000.

The maximum size of unstructured data sources that can be imported by a Novice user at one time is restricted by the NOVICE_IMPORT_UNSTRUCTURED_MAX_SIZE_IN_MB system property. The default value for this property is 5 megabytes.
Tagging text

The maximum number of tags that a Novice user can create using the Find and Tag helper is restricted by the system property NOVICE_FIND_AND_TAG_MAX_TAGS. The default setting for this property is 50.

Novice users cannot access the Tag All Occurrences in Tab option in the Browser’s Tag As dialog.
SearchAround search templates

Novice users cannot import SearchAround Templates from XML files.

Novice users cannot publish SearchAround templates for use by the entire deployment, and cannot edit published templates.
All other SearchAround features remain available.
Resolving Nexus Peering data conflicts
The Pending Changes application is available only in the Palantir Enterprise Platform, and is only accessible to Workspace users who belong to the Nexus Peering Data Managers user group.
Nexus Peering Data Managers use the Pending Changes application to check for, analyze, and resolve data conflicts that are not automatically resolved when a local nexus is synchronized with a peered nexus.
Deleting objects

Novice users cannot delete published objects.

Novice users cannot delete objects created or changed by other users.
Resolving objects

The maximum number of objects that Novice users can resolve together at one time is restricted by the NOVICE_RESOLVE_MAX_OBJECTS system property. This restriction does not apply to objects resolved by using existing object resolution suites in the Object Resolution Wizard or during data import.

Novice users may use the Object Resolution Wizard only when using existing object resolution suites. Novice users cannot perform Manual Object Resolution, and cannot record new resolution criteria as an Object Resolution Suite.
To learn more, see Resolving and Unresolving Objects in Workspace: Beyond the Basics.
Map application restrictions
All map metadata tools in the Layers helper are restricted.
Novice users cannot access features that allow sorting of layers by metadata, coloring by metadata, or the creation of new metadata. All other Layer helper functions remain available.

In case you didn't get what I just said: you have access to the same tools the FBI and CIA use, with some minor limitations and no access to classified documents. If you have access to Wolfram Alpha/Mathematica and can google for history on your topic of interest, then most of the classified files become redundant.

What about data mining on a budget?

Consider relying on one or more GPUs. A CPU is designed to be a multitasker that can quickly switch between actions, whereas a Graphics Processing Unit (GPU) is designed to do the same calculations repetitively, giving large increases in performance. The hardware and software stacks in the papers below, while already delivering dramatic speed-ups, did not use modern designs or current graphics cards, which kept them from running even faster.

http://www.azintablog.com/2010/10/16/gpu-large-scale-data-mining/

The GPU (Graphics Processing Unit) is changing the face of large scale data mining by significantly speeding up the processing of data mining algorithms. For example, using the K-Means clustering algorithm, the GPU-accelerated version was found to be 200x-400x faster than the popular benchmark program MineBench running on a single-core CPU, and 6x-12x faster than a highly optimised CPU-only version running on an 8-core CPU workstation.

These GPU-accelerated performance results also hold for large data sets. For example, in 2009, on a data set with 1 billion 2-dimensional data points and 1,000 clusters, the GPU-accelerated K-Means algorithm took 26 minutes (using a GTX 280 GPU with 240 cores) whilst the CPU-only version running on a single-core CPU workstation, using MineBench, took close to 6 days (see the research paper "Clustering Billions of Data Points using GPUs" by Ren Wu and Bin Zhang, HP Laboratories). Substantial additional speed-ups would be expected were the tests conducted today on the latest Fermi GPUs with 480 cores and 1 TFLOPS performance.
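
To make the comparison concrete, here is a rough Python sketch of Lloyd's K-Means written so the same code runs on either the CPU (NumPy) or the GPU (CuPy). The library choice, data sizes, and cluster count are illustrative assumptions of mine, not details from the paper:

```python
# A minimal K-Means sketch; pass numpy to run on the CPU, cupy to run the
# identical code on the GPU (assumes CuPy and an NVIDIA card are available).
import numpy as np

def kmeans(xp, points, k=8, iters=20, seed=0):
    """Lloyd's algorithm; `xp` is either the numpy or the cupy module."""
    start = np.random.default_rng(seed).choice(len(points), k, replace=False)
    centroids = points[xp.asarray(start)]
    for _ in range(iters):
        # distance from every point to every centroid: shape (n, k)
        d = xp.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centroid as the mean of its assigned points
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

data = np.random.rand(200_000, 2).astype(np.float32)
kmeans(np, data)                      # CPU run

try:
    import cupy as cp
    kmeans(cp, cp.asarray(data))      # GPU run, same code path
except ImportError:
    pass                              # no CuPy / no GPU available
```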

Over the last two years hundreds of research papers have been published, all confirming the substantial improvement in data mining that the GPU delivers. I will identify a further 7 data mining algorithms where substantial GPU acceleration has been achieved, in the hope that it will stimulate your interest in using GPUs to accelerate your data mining projects:

Hidden Markov Models (HMM) have many data mining applications, such as financial economics, computational biology, addressing the challenges of financial time series modelling (non-stationarity and non-linearity), analysing network intrusion logs, etc. Using parallel HMM algorithms designed for the GPU, researchers (see cuHMM: a CUDA Implementation of Hidden Markov Model Training and Classification by Chuan Lin, May 2009) were able to achieve a performance speedup of up to 800x on a GPU compared with the time taken on a single-core CPU workstation.

Sorting is a very important part of many data mining applications. Last month Duane Merrill and Andrew Grimshaw (from the University of Virginia) reported a very fast implementation of the radix sorting method, able to exceed a 1G keys/sec average sort rate on the GTX 480 (NVIDIA Fermi GPU). See http://goo.gl/wpra

Density-based Clustering is an important paradigm in clustering since it is typically robust to noise and outliers and very good at finding clusters of arbitrary shape in metric and vector spaces. Tests have shown that the GPU speed-up ranged from 3.5x for 30k points to almost 15x for 2 million data points. A guaranteed GPU speedup factor of at least 10x was obtained on data sets consisting of more than 250k points. (See "Density-based Clustering using Graphics Processors" by Christian Bohm et al.)

Similarity Join is an important building block for similarity search and data mining algorithms. Using a special algorithm called index-supported similarity join, researchers were able to make the GPU outperform the CPU by a factor of 15.9x on 180 MB of data (see "Index-supported Similarity Join on Graphics Processors" by Christian Bohm et al.).

Bayesian Mixture Models have applications in many areas, and of particular interest is the Bayesian analysis of structured massive multivariate mixtures with large data sets. Recent research work (see "Understanding GPU Programming for Statistical Computation: Studies in Massively Parallel Massive Mixtures" by Marc Suchard et al.) has demonstrated that an older-generation GPU (GeForce GTX 285 with 240 cores) was able to achieve a 120x speed-up over a quad-core CPU version.

Support Vector Machines (SVM) have many diverse data mining uses, including classification and regression analysis. Training SVMs and using them for classification remains computationally intensive. The GPU version of an SVM algorithm was found to be 43x-104x faster than the CPU version for building classification models and 112x-212x faster than the CPU version for building regression models. See "GPU Accelerated Support Vector Machines for Mining High-Throughput Screening Data" by Quan Liao, Jibo Wang, et al.

Kernel Machines. Algorithms based on kernel methods play a central part in data mining, including modern machine learning and non-parametric statistics. Central to these algorithms are a number of linear operations on matrices of kernel functions which take as arguments the training and testing data. Recent work (see "GPUML: Graphical processors for speeding up kernel machines" by Balaji Srinivasan et al., 2009) transforms these kernel machines into parallel kernel algorithms on a GPU. Two examples where considerable speed-ups were achieved: (1) estimating the densities of 10,000 data points on 10,000 samples, where the CPU implementation took 16 seconds whilst the GPU implementation took 13 ms, a speed-up well in excess of 1,230x; (2) Gaussian process regression on 8-dimensional data, where the GPU took 2 seconds to make predictions whilst the CPU version took hours to make the same predictions.
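
As a small-scale illustration of the kernel-matrix work being accelerated here, the sketch below runs a Gaussian process regression with scikit-learn on the CPU. It only shows the computation a GPU would speed up, not the GPUML implementation itself, and the data is synthetic:

```python
# A small Gaussian process regression: fitting builds and factorizes an n-by-n
# kernel matrix, the linear algebra that dominates at large n and that a GPU
# parallelizes well. Synthetic data; illustrative only, not the GPUML code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 8))                 # 8-dimensional inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)  # noisy target

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2))
gp.fit(X, y)                                          # kernel-matrix build and solve

X_new = rng.uniform(-3, 3, size=(10, 8))
mean, std = gp.predict(X_new, return_std=True)        # predictive mean and uncertainty
```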

If you want to use GPUs but do not want to get your hands "dirty" writing CUDA C/C++ code (or other language bindings such as Python, Java, .NET, Fortran, Perl, or Lua), then consider using the MATLAB Parallel Computing Toolbox. This is a powerful solution for those who know MATLAB. Alternatively, R now has GPU plugins. A subsequent post will cover using MATLAB and R for GPU accelerated data mining.
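
For what it's worth, Python has a similar "no hand-written CUDA" route: a library like CuPy (my suggestion, not the article's) exposes NumPy-style arrays that live on the GPU, so ordinary array code runs as GPU kernels:

```python
# NumPy-style arrays on the GPU, no CUDA C required (assumes CuPy + an NVIDIA GPU).
import cupy as cp

a = cp.random.rand(10_000_000)       # allocated directly on the GPU
b = cp.sqrt(a) * 2.0 + 1.0           # elementwise math runs as GPU kernels
total = float(b.sum())               # reduce on the GPU, copy one scalar back
print(total)
```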

These are space whales flying through the sun:

Categories
technology

Replying To John Robb on Drones, Self-Driving Cars

John’s post:

I spent some time on the phone with a reporter from Inc. Magazine last week.  We were discussing the future of entrepreneurship and where new opportunity could be found.

He asked me about drones and if there were opportunities for entrepreneurs there.

I told him that there were only two places where drones were going to gain traction:

  • Security.   From military operations to intelligence gathering to police surveillance.
  • DiY.   People building their own drones and finding interesting ways to use them.

That’s it.

Why?

All of the other uses of drones are closed off due to legal restrictions:

  • Drones for passenger transport.  It’s pretty clear that drones could be used to transport passengers safely and at much less cost than a manned aircraft.  It won’t happen.  Too many legal implications and push back from unions.
  • Drones for private info gathering.  Currently prevented.  There’s going to be legal wrangling over this for decades, which will prevent an industry from forming (other than “security” related).
  • Drones for short haul delivery/transport.  Too difficult to overcome the legal ramifications of operating drones on a mass scale near to homes/buildings.  It will definitely be used in the military.

Much of the same logic is going to be applied to other forms of autonomous robotics.  For example: robots can drive a car better than a human being.  Google proved that already with their mapping car.  Will it be common to see "automated" cars in the next decade?  Probably not.  The first person killed by one will kill the industry through lawfare. Link

My response:

I doubt it, John. I think you have over-weighted the power of lawsuits versus innovation.

Usually when there are legal blockages to a technology, they hold it back only for a short time until it finds a way around them. See the transition from embryonic stem cells to skin-derived stem cells.

For cars in particular, people want safety much more than energy efficiency. That's part of the reason SUVs have outsold economy car designs: most of the models are much safer, barring the extremely large ones that are vulnerable to tipping over.

Recently, smaller SUVs that combine the two attributes have become the most popular design. The demand for safety in automobiles has always been extreme, and in the case of the Google car, Sebastian Thrun has been extremely careful in making sure the cars don't have any accidents, even in testing.

People put more trust into tech companies like Google than they do the legal system or congress, by a wide margin:
http://www.edelman.com/news/trust-in-government-suffers-a-severe-breakdown-across-the-globe/

Nevada has already legalized self-driving cars, and California is following.

So you assume that:
1. People’s desire for safety in consumer choices will be outweighed by their desire for control
2. There is no judo move to counter regulations, as there has been for many past technologies
3. That current friendly regulations are a smoke-screen for a coming crackdown
4. That people won’t trust the Google brand in particular in the case of the car
5. That a small number of lawsuits can destabilize a potentially multi-billion dollar industry, in spite of there being a perfect safety record thus far

Categories
Uncategorized

Replying To Ion-Tom On Ideas for the Ultimate Strategy Game

He wrote a long post, but here is the beginning:

I want to see procedural content not just create the raw terrain maps, but thousands of tribes, nations and empires. I want it to generate language, culture, aesthetics, architecture, religion, scientific progress, humanity. I believe that many different game engines should connect to a single cloud data application in order to create persistent worlds. Strategy gaming should prepare for next generation graphics technology, neural network AI, and implement many different portals to access game information. Taking exponential trends into consideration, I want to see what a strategy game looks like when millions of semi-intelligent “agents” compete or collaborate for resources.

I’m talking about Guns, Germs and Steel in gaming form. It could answer questions about human settlement patterns. With different continental configurations, do certain types of regions always become colonial powers or is having many states in feudal competition becoming market powers all that is needed? Does this usually follow the parallel latitude crop theory? The game could have an arcade mode like the Civilization and an observer mode: set the stage and watch. Go back in time, change a few variables and watch the difference. Maybe I’m alone, but that type of concept excites me!

My response:

So one of the first things that came to mind was Palantir's software:

And taking their idea of reducing friction between human and computer to enhance human capabilities: http://www.palantir.com/2010/03/friction-in-human-computer-symbiosis-kasparov-on-chess/

Also, here's an overview of some of the older high-dollar agent-based modeling (ABM) projects that were implemented in the past, mostly using Java: http://home.comcast.net/~dshartley3/DIMEPMESIIGroup/ModelingSimulation.htm

Combat will change over time, like you said. One of the models being used in real life is generational, 1st to 4th generation warfare (see: John Robb, Global Guerrillas), with a theoretical 5th generation. So you would have to choose how realistically or metaphorically you render that action. You know, whether you have a Civilization-style event where spearmen destroy a tank, or whether you can zoom straight down and play 1st person as the spearman trying to exploit the terrain, things like that. Then you move into modern things like winning hearts and minds, and then into the vague world of secrecy and influencing complex systems in 5GW.

Economics will change over time.

If you're creating language, then you're also creating the box within which people think to some degree, so that creates a feedback loop with culture. So you have to take a systems dynamics approach, with stocks, flows and feedback loops, and possibly go much further, because you have to extend the model so far out.
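
A toy version of what I mean, just to show the shape of a stock-and-flow loop (the stock names and rates are invented for illustration):

```python
# A toy systems-dynamics model: two stocks in a reinforcing feedback loop,
# each with a small decay term. Names and coefficients are made up.

def simulate(steps=50, dt=1.0):
    vocabulary = 100.0       # stock: size of the shared lexicon
    culture = 10.0           # stock: accumulated cultural output
    history = []
    for _ in range(steps):
        # flows: each stock feeds the growth of the other
        new_words = 0.02 * culture - 0.005 * vocabulary
        new_culture = 0.03 * vocabulary - 0.01 * culture
        vocabulary += new_words * dt
        culture += new_culture * dt
        history.append((vocabulary, culture))
    return history

for i, (v, c) in enumerate(simulate()):
    if i % 10 == 0:
        print(f"step {i:3d}  vocabulary={v:7.1f}  culture={c:7.1f}")
```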

You will have to measure emotional reactions as well, but the question is how. Are you going to use the traditional valence and arousal model to capture how a given population is reacting to stimuli? I'm thinking back to OpenCog's agent and its ability to become "scared" or inquisitive because it uses an economic attention-allocation model.
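
Something as crude as this would be a starting point for per-agent valence and arousal (the events and coefficients are invented for the example):

```python
# A crude valence/arousal state for one agent: events nudge it, and it decays
# back toward a neutral baseline each tick. All numbers are placeholders.
from dataclasses import dataclass

@dataclass
class Mood:
    valence: float = 0.0   # negative = unpleasant, positive = pleasant, clamped to [-1, 1]
    arousal: float = 0.0   # 0 = calm, 1 = highly activated

    def feel(self, d_valence, d_arousal):
        self.valence = max(-1.0, min(1.0, self.valence + d_valence))
        self.arousal = max(0.0, min(1.0, self.arousal + d_arousal))

    def decay(self, rate=0.1):
        self.valence *= 1 - rate
        self.arousal *= 1 - rate

villager = Mood()
events = [("harvest", +0.3, +0.1), ("raid", -0.6, +0.7), ("quiet day", 0.0, 0.0)]
for name, dv, da in events:
    villager.feel(dv, da)
    villager.decay()
    print(f"{name:9s} -> valence {villager.valence:+.2f}, arousal {villager.arousal:.2f}")
```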

I don't think Jared Diamond's theory is going to cut it for explaining everything to the degree needed to simulate this world from a historical perspective, though. You need diverse agents, connected by interdependent relationships, that adapt over time.

Bleh, that’s all I got right now.

Also, if you could do that, it might be realistic and immersive enough that people would pay you to test and develop it, like a subscription-based MMORPG-ish thing.

See also: Game Mechanics: Advanced Game Design by Ernest Adams and Joris Dormans

It's on Amazon and "other" sites.

If you wanted to introduce more variety into the game, instead of following a fixed technology tree, perhaps it could implement different concepts of the singularity toward the "end game". You could also shift currency over time: as processes become automated you would move from a coin-based economy to a paper one, then from paper to digital currency, then possibly to currency based on:

  1. Energy – This may or may not be relevant: on the one hand, advanced solar panels and cells combined with nuclear power make energy abundant; on the other, it remains to be seen how much power future computing will eat up. Current power drain on large systems comes not only from the computers themselves but from the cost of cooling. Intel has been working on low-power mobile processors: http://newsroom.intel.com/community/intel_newsroom/blog/2012/09/11/intel-low-power-processors-to-fuel-future-of-mobile-computing-innovation
  2. Antimatter: http://en.wikipedia.org/wiki/Antimatter#Cost Scientists claim that antimatter is the costliest material to make.[37] In 2006, Gerald Smith estimated $250 million could produce 10 milligrams of positrons[38] (equivalent to $25 billion per gram); in 1999, NASA gave a figure of $62.5 trillion per gram of antihydrogen.[37] This is because production is difficult (only very few antiprotons are produced in reactions in particle accelerators), and because there is higher demand for other uses of particle accelerators. According to CERN, it has cost a few hundred million Swiss Francs to produce about 1 billionth of a gram (the amount used so far for particle/antiparticle collisions).[39] Several NASA Institute for Advanced Concepts-funded studies are exploring whether it might be possible to use magnetic scoops to collect the antimatter that occurs naturally in the Van Allen belt of the Earth, and ultimately, the belts of gas giants, like Jupiter, hopefully at a lower cost per gram.[40]
  3. If computers take over most of the operations in a society, costs could be based on CPU cycles.

This raises the question of whether space exploration would be useful in this context. Bill Stone has an old but good TED talk on space exploration and his ongoing work toward a journey to the moon, including his plan to mine the fuel for the return trip from the moon itself:

In the computers-rule-everything dynamic, Hugo de Garis has some interesting ideas on "Artilects": http://profhugodegaris.wordpress.com/artilect-polls/

One field no one mentions: hardware security. Trying to beat computers through software is nice and all, but there are many hardware bugs, BIOS rootkits (technically software), FireWire attacks that bypass security, and USB sticks with malware and keyloggers (which can be built into a keyboard). Unless an AI/AGI has built-in defenses that make its hardware difficult to get at, let it self-destruct, or let it retaliate against attackers, it is a vulnerable target.

Eclipse Phase also has interesting implementations.

His reply:

I like everything about what you just said. I’m familiar with the Van Allen Belt antimatter and Hugo de Garis but I wasn’t familiar with Bill Stone, that’s pretty awesome! And the Adams-Dormans book looks awesome! Currently I’m reading this book my friend lent me. (Glad I didn’t have to pay for it!)

I am a big proponent of a genetic algorithm based tech tree that builds momentum towards a singularity end game. Not sure how it could be implemented at first. In an ideal world the engine is sufficiently advanced to model physics and the agents experiment with the physics engine. For the short term, I think breaking all technologies into their basic components based on physics would give a good “lego set” for building technologies. Each component could have a weight. Every time it gets used in a technology it gets a stronger weight. Etc. The momentum builds.

So you think hardware security makes AGI vulnerable? I suppose it's an engineering question. Right now it's vulnerable, but so were the world's first single-cell organisms. I'll bet security increases over time as neuromorphic chips become more complex; maybe not though.
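
His component-weighting idea is simple to prototype. A rough sketch, with the component names and the sampling rule as stand-ins of my own:

```python
# A weight-reinforced tech "lego set": components used in a discovered technology
# become more likely to be drawn again, so research momentum builds toward what
# the civilization already knows. Component names and rules are placeholders.
import random

weights = {"lever": 1.0, "wheel": 1.0, "combustion": 1.0, "circuit": 1.0, "lens": 1.0}

def discover(rng):
    parts = rng.choices(list(weights), weights=list(weights.values()), k=2)
    for p in parts:
        weights[p] += 0.5            # reinforcement: used components gain weight
    return " + ".join(sorted(parts))

rng = random.Random(42)
for turn in range(10):
    print(f"turn {turn}: discovered {discover(rng)}")
print(weights)
```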

Categories
Uncategorized

Replying To Nicholas Eftimiades On Intelligence at the Speed of Thought

The link to his original post is here:

For future national security needs, the most stressing intelligence requirements will be for remote-sensing systems to detect, track, cross-cue, and characterize fleeting targets in real time. This ability will require a global network of sensors to detect and track individuals, vehicles, chemicals, materials, and emanations and a space network backbone to move data.  Pervasive CCTV systems now present worldwide in airports, border crossings, railroads, buses, and on the streets of many cities will be integrated and supported by powerful computers, smart software agents, vast facial pattern and retina recognition databases, and communications infrastructure. These systems will be integrated with sensors and databases detecting, identifying, and characterizing spectral signatures, chemical compositions, DNA, effluents, sounds, and much more.

My Response:

There's an interesting piece of software called Eureqa that can search for hidden mathematical equations in data. It's amazing how quickly supervised and unsupervised learning algorithms, along with gesture recognition, are developing.
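
Eureqa's core idea is easy to sketch: propose candidate expressions, fit their coefficients, and keep whichever one explains the data best. The brute-force version below is my own stand-in, not how Eureqa actually works (it uses evolutionary symbolic regression):

```python
# Brute-force "equation discovery": try a few candidate forms y = a*f(x) + b and
# keep the one with the lowest squared error. Eureqa searches a far larger space
# of expressions, but the goal is the same.
import numpy as np

x = np.linspace(0.1, 10, 200)
y = 3.0 * np.sin(x) + 0.5 + np.random.default_rng(1).normal(0, 0.05, x.size)

candidates = {"a*x + b": x, "a*x**2 + b": x**2,
              "a*sin(x) + b": np.sin(x), "a*log(x) + b": np.log(x)}

best = None
for name, f in candidates.items():
    A = np.column_stack([f, np.ones_like(x)])        # design matrix for [a, b]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    err = float(((A @ [a, b] - y) ** 2).sum())
    if best is None or err < best[0]:
        best = (err, name, a, b)

print(f"best fit: {best[1]} with a={best[2]:.2f}, b={best[3]:.2f}")
```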

Over time we'll develop better long-range sensors to detect emotional valence and arousal, so we can judge the details of a person's emotional state and correlate it with the rest of the data. Thermal and hyperspectral imaging can be used to judge the blood flow to an area like the face, indicating stress. We have simple things like EPS and heartbeat sensors and eye-tracking software, and they're improving over time. Microsoft Kinect sensors just build a simple stick-figure skeleton, but newer sensors are being developed that have more potential. This input will likely improve agent-based modeling software, since we will be able to use actual emotions as inputs.

Satellite launch costs are going down and NASA is turning LEO over to the private sector, so we can expect an increase in space-based sensors and services. That might lead to better climate detection models and therefore better advanced hurricane/tornado warning times.

It also applies to AGI research: much of the data we learn from comes in through vision, among other senses. So from the perspective of building an AGI, adding more sensors means it can get smarter much faster and in entirely new ways. When you talk about integrating that level of sensory information and processing it, you end up with intelligence that makes the difference between an Einstein and a village idiot seem as tiny as a grain of sand.

We can already load sounds into programs like Wolfram Mathematica and analyze them, extract data, and then plot, graph or connect the data in hundreds of other ways. I'm not as familiar with MATLAB, but I know it has a wide range of functions too.
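
The Python equivalent is only a few lines; the WAV path below is a placeholder:

```python
# Load a sound file and look at its frequency content -- the same kind of quick
# analysis described above for Mathematica. "recording.wav" is a placeholder path.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("recording.wav")
if samples.ndim > 1:                     # mix stereo down to mono
    samples = samples.mean(axis=1)
samples = samples - samples.mean()       # remove DC offset before the FFT

spectrum = np.abs(np.fft.rfft(samples))  # magnitude spectrum
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
print(f"sample rate: {rate} Hz, dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")
```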

Right now the main concern is reducing interface friction so humans and machines can work together properly, but every year more functions are being added to the software and more data is being captured. Eventually we’re going to need a significant step up in intelligence to be able to work with it.

Thinking more on it, the alternative may be that things will get easier to use.

Programming languages have become somewhat simpler over time. As compilers catch up with being able to handle memory as well as or better than humans, going down to the C/C++ level won't be required, as long as there aren't incompatibility issues. GUIs have gotten better over the years as well.

I wonder how humans will choose to control access and connections between their AI/AGI programs as time goes on. The newer generation isn't as concerned about privacy and is willing to give out tons of data on Twitter and Facebook.

Another area that's still very empty: implant security. Most of these devices can pick up wireless signals now, and hackers have already figured out ways to mess with pacemakers and the like. Ditto for self-driving cars; we've already had people hacking GPS receivers to make them give false data. We're going to have attack-versus-defense issues, security versus accessibility, the works.

(Technical note: Kinect skeleton drawing is done in software, but improvements to the hardware will affect its capabilities.)

His response:

I agree with most of what you wrote, but I don't think lowering launch costs is going to lead to better climate detection models. Those space-based sensors are excellent now. That is more a function of computing power and airborne/ground-based sensors. And an increase in space-based sensors and services is going to be more a function of electronics miniaturization allowing more capability in orbit for the same launch price. But I agree launch costs will come down as well.

Also, thermal and hyperspectral imaging can be used to judge the blood flow to an area like the face, but it is only useful if you have the spectral signature of that specific face at rest and under stress. Either that, or you have continuous monitoring and can watch the blood flow go up and down.

Implant security is an area of concern. A UK college professor recently demonstrated infecting numerous devices with an embedded biochip.

Cool discussion.

Categories
hacker culture Intelligence International Affairs

Interview With Jacob Appelbaum, Member of Tor and Wikileaks

If you're wondering why they have a microscope embedded so deeply in his ass, he used to be a spokesperson for Wikileaks and he's also a member of the Cult of the Dead Cow. Hacktivists have a six-degrees-of-Kevin-Bacon connection to Wikileaks, and it's likely that not all of the material they receive was purposefully leaked. After credit card companies and banks cut ties with Wikileaks, they were hit with an extended DDoS attack. As he describes in the interview, looking at metadata and relationships between people, even when using open source information, has created reliable simulations of outcomes.

Some of it is as safe as we think it can be, and some of it is not safe at all. The number one rule of "signals intelligence" is to look for plain text, or signaling information—who is talking to whom. For instance, you and I have been emailing, and that information, that metadata, isn't encrypted, even if the contents of our messages are. This "social graph" information is worth more than the content. So, if you use SSL-encryption to talk to the OWS server for example, great, they don't know what you're saying. Maybe. Let's assume the crypto is perfect. They see that you're in a discussion on the site, they see that Bob is in a discussion, and they see that Emma is in a discussion. So what happens? They see an archive of the website, maybe they see that there were messages posted, and they see that the timing of the messages correlates to the time you were all browsing there. They don't need to break the crypto to know what was said and who said it.

Traffic analysis. It’s as if they are sitting outside your house, watching you come and go, as well as the house of every activist you deal with. Except they’re doing it electronically. They watch you, they take notes, they infer information by the metadata of your life, which implies what it is that you’re doing. They can use it to figure out a cell of people, or a group of people, or whatever they call it in their parlance where activists become terrorists. And it’s through identification that they move into specific targeting, which is why it’s so important to keep this information safe first.

For example, they see that we’re meeting. They know that I have really good operational security. I have no phone. I have no computer. It would be very hard to track me here unless they had me physically followed. But they can still get to me by way of you. They just have to own your phone, or steal your recorder on the way out. The key thing is that good operational security has to be integrated into all of our lives so that observation of what we’re doing is much harder. Of course it’s not perfect. They can still target us, for instance, by sending us an exploit in our email, or a link in a web browser that compromises each of our computers. But if they have to exploit us directly, that changes things a lot. For one, the NYPD is not going to be writing exploits. They might buy software to break into your computer, but if they make a mistake, we can catch them. But it’s impossible to catch them if they’re in a building somewhere reading our text messages as they flow by, as they go through the switching center, as they write them down. We want to raise the bar so much that they have to attack us directly, and then in theory the law protects us to some extent.

But iPhones, for instance, don’t have a removable battery; they power off via the power button. So if I wrote a backdoor for the iPhone, it would play an animation that looked just like a black screen. And then when you pressed the button to turn it back on it would pretend to boot. Just play two videos. Link
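
The timing-correlation attack he describes is worth making concrete; even a toy version shows how little the content matters (every name and timestamp below is invented):

```python
# Toy traffic analysis: match encrypted-connection times against public post
# times and infer who likely wrote what. All data here is invented.
from datetime import datetime, timedelta

connections = {                      # when each monitored user hit the (encrypted) site
    "you":  [datetime(2012, 5, 1, 14, 2), datetime(2012, 5, 1, 18, 40)],
    "bob":  [datetime(2012, 5, 1, 9, 15)],
    "emma": [datetime(2012, 5, 1, 18, 41)],
}
posts = [datetime(2012, 5, 1, 14, 3), datetime(2012, 5, 1, 18, 42)]  # public post times

window = timedelta(minutes=3)
for post in posts:
    suspects = [who for who, times in connections.items()
                if any(abs(post - t) <= window for t in times)]
    print(f"post at {post:%H:%M} -- likely authors: {suspects}")
```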

Categories
economics technology

Peter Thiel and George Gilder debate on “The Prospects for Technology and Economic Growth”

alt link:

http://www.youtube.com/watch?v=XRrLyckg8Nc&feature=related

Categories
technology

The Engineer’s Proverb

You can’t make a baby in 1 month with 9 women.

Categories
Hydroponics technology

Japanese Company Lets Plants Grow On Thin Films Instead Of Soil

Video demo:

http://techcrunch.com/2011/08/15/imec-japanese-company-lets-plants-grow-on-thin-films-instead-of-soil-video/

Mebiol says that tomatoes, radish, cucumber, melons etc. need up to 80% less water to grow when compared with conventional culture and that 1g of SkyGel (that’s the brand name of the hydrogel) absorbs and holds 100ml of water. In contrast to soil, bacteria or viruses have no chance to harm the plants. Another advantage is that SkyGel can be used on various surfaces, including sand, concrete or ice (see this PDF for examples from recent years).

The film can be used to grow plants for 2-3 years before it needs to be replaced, according to the company.

Categories
technology

Whole Brain Emulation – Randal Koene

http://vimeo.com/17096145

Categories
technology

Artificial Wombs

A team led by Professor of Tissue Engineering, Kevin Shakesheff, has created a new device in the form of a soft polymer bowl which mimics the soft tissue of the mammalian uterus in which the embryo implants. The research has been published in the journal Nature Communications.

This new breakthrough is part of a major research effort at Nottingham to learn how the development of the embryo can teach us how to repair the adult body. The work is led by Professor Kevin Shakesheff with funding from European Research Council.

Professor Shakesheff added: "Everyone reading this article grew themselves from a single cell. Within weeks of the embryo forming, all of the major tissues and organs are formed and starting to function. If we could harness this remarkable ability of the human body to self-form then we could design new medical treatments that cure diseases that are currently untreatable. For example, diseases and defects of the heart could be reversed if we could recreate the process by which cardiac muscle forms and gets wired into the blood and nervous system." Link

There are two commonly cited endeavors in progress. Focusing on finding ways to save premature babies, Japanese professor Dr. Yoshinori Kuwabara of Juntendo University has successfully gestated goat embryos in a machine that holds amniotic fluid in tanks. On the other end of the process, focusing on helping women unable to conceive and gestate babies, is Dr. Helen Hung-Ching Liu, Director of the Reproductive Endocrine Laboratory at the Center for Reproductive Medicine and Infertility at Cornell University. Quietly, in 2003, she and her team succeeded in growing a mouse embryo, almost to full term, by adding engineered endometrium tissue to a bio-engineered, extra-uterine "scaffold." More recently, she grew a human embryo for ten days in an artificial womb. Her work is limited by legislation that imposes a 14-day limit on research projects of this nature. As complicated as it is, her goal is a functioning external womb. Link

Categories
Future Trends technology

Why Framing Your Enemies Is Now Virtually Child’s Play

A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else’s stuff, then call the police. Link