<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Indexing Reality: Creating a Mine of Geospatial Information</title>
	<atom:link href="https://www.digitalurban.org/blog/2008/08/10/indexing-reality-creating-mine-of/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.digitalurban.org/blog/2008/08/10/indexing-reality-creating-mine-of/</link>
	<description>Data, Cities, IoT, Writing, Music and Making Things</description>
	<lastBuildDate>Wed, 20 Aug 2008 17:37:00 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<item>
		<title>By: Goulding</title>
		<link>https://www.digitalurban.org/blog/2008/08/10/indexing-reality-creating-mine-of/#comment-2803</link>

		<dc:creator><![CDATA[Goulding]]></dc:creator>
		<pubDate>Wed, 20 Aug 2008 17:37:00 +0000</pubDate>
		<guid isPermaLink="false">http://digitalurban.net/?p=1603#comment-2803</guid>

					<description><![CDATA[EarthMine looks like they are doing some interesting stuff. I really like how one can measure anything one sees in a panorama view. This can prove invaluable when creating context models for 3D visualization.&lt;br/&gt;&lt;br/&gt;To me, the most powerful feature of their technology (as I understand it) would be to enable one to merge panoramas of existing conditions with 3D models of new interventions. &lt;br/&gt;&lt;br/&gt;We are constantly engaged in creating before and after images to convey our design intent. These can be tedious to set up in a 3D environment (using photo matching) and only support 1 static view at a time. What&#039;s more, foreground content from existing conditions needs to be masked manually.&lt;br/&gt;&lt;br/&gt;Since their technology appears to make use of real 3D coordinates for each panorama, I suspect it would be possible to view 3D building designs either in front of or behind existing buildings simply by comparing the relative distance from the camera (a kind of z-buffering if you will)?&lt;br/&gt;&lt;br/&gt;Being able to &#039;walk&#039; an existing street augmented with a new design intervention (which could be toggled on and off) seems to me like the ultimate before and after visualization. However, I have not seen any mention of this on their site - so perhaps it&#039;s not possible.]]></description>
			<content:encoded><![CDATA[<p>EarthMine looks like they are doing some interesting stuff. I really like how one can measure anything one sees in a panorama view. This can prove invaluable when creating context models for 3D visualization.</p>
<p>To me, the most powerful feature of their technology (as I understand it) would be to enable one to merge panoramas of existing conditions with 3D models of new interventions. </p>
<p>We are constantly engaged in creating before and after images to convey our design intent. These can be tedious to set up in a 3D environment (using photo matching) and only support 1 static view at a time. What&#8217;s more, foreground content from existing conditions needs to be masked manually.</p>
<p>Since their technology appears to make use of real 3D coordinates for each panorama, I suspect it would be possible to view 3D building designs either in front of or behind existing buildings simply by comparing the relative distance from the camera (a kind of z-buffering if you will)?</p>
<p>Being able to &#8216;walk&#8217; an existing street augmented with a new design intervention (which could be toggled on and off) seems to me like the ultimate before and after visualization. However, I have not seen any mention of this on their site &#8211; so perhaps it&#8217;s not possible.</p>
]]></content:encoded>
	</item>
	</channel>
</rss>
