<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>3D Modelling Archives - Digital Urban</title>
	<atom:link href="https://www.digitalurban.org/blog/category/3d-modelling/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.digitalurban.org/blog/category/3d-modelling/</link>
	<description>Data, Cities, IoT, Writing, Music and Making Things</description>
	<lastBuildDate>Thu, 07 Aug 2025 15:34:12 +0000</lastBuildDate>
	<language>en-GB</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.digitalurban.org/wp-content/uploads/2012/07/Dulogosm-1.png</url>
	<title>3D Modelling Archives - Digital Urban</title>
	<link>https://www.digitalurban.org/blog/category/3d-modelling/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>TIME: An Open Source 3D Printed Clock</title>
		<link>https://www.digitalurban.org/blog/2025/08/05/time-an-open-source-3d-printed-clock/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Tue, 05 Aug 2025 09:33:43 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[3D Printing]]></category>
		<guid isPermaLink="false">https://www.digitalurban.org/blog/2025/08/05/time-an-open-source-3d-printed-clock/</guid>

					<description><![CDATA[<p>We are about to release ‘TIME,’ our fully 3D-printed mechanical clock, on Printables. This post provides insight into both its creation and development.</p>
<p>The post <a href="https://www.digitalurban.org/blog/2025/08/05/time-an-open-source-3d-printed-clock/">TIME: An Open Source 3D Printed Clock</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">We have released ‘TIME,’ our fully 3D-printed mechanical clock, on Printables. This post provides insight into both its creation and development.</h2>



<p>Over the years, I have followed various clock makers online, looking at their designs and 3D printed models, and generally spending more time than I would like to admit trying to get a 3D printed clock running. Makers such as the excellent <a href="https://woodenclocks.co.uk/">Brian Law</a> (wooden and 3D printed), <a href="https://www.stevesclocks.com/">Steve&#8217;s Clocks</a> (great insights and designs), <a href="https://engineezy.com/products/the-3d-printed-wall-clock?srsltid=AfmBOoo0TeeDPAo3nD324gp0kzedsIHfIQQR-34iwf9ZqKiXIjJkbbTH">JBV</a> (amazing, complex engineering) and <a href="https://wooden-gear-clocks.com/">Wooden Gear Clocks</a> (a lovely site for premade clocks which come as kits, but which still need more skill than it turns out I have in the basic cutting and sanding of brass rods) have all inspired me, but they have also driven my need for a simpler, open source design which would be free and easier to build.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>It has taken a year to perfect, mainly as a summer project last year (in between work) and refining it this year to reach a point where other people could make it.</p>
</blockquote>



<p>No 3D printed clock is, however, ever completely &#8216;easy&#8217; to build; they all need a few extra parts, mainly metal rods and bearings to reduce friction. We have limited these parts to a minimum, made sure they are all off-the-shelf parts, and reduced the need for any sanding or cutting to one small part. We have also provided all the <a href="https://www.printables.com/model/1375086-time-a-3d-printed-clock">files on Printables</a>, with GitHub incoming, allowing others to refine and contribute new designs or updates. As time goes along, the clock will evolve &#8211; but for now it&#8217;s ready for its first release, and it’s called TIME.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/2ca3223f-279d-4a22-9cd6-5df9448a4e25_2400x1621.png" alt="" /></figure>



<p>We have tried and mainly failed with other designs online; as such, we wanted to build our own version, from first principles, not modifying others, but from the ground up, with the following aims:</p>



<ol class="wp-block-list">
<li>
<p>It should be as easy to build and replicate as possible</p>
</li>



<li>
<p>Any additional parts should be easy to source and low-cost</p>
</li>



<li>
<p>Cutting/Sanding should be limited</p>
</li>



<li>
<p>It should have a proper 1-second tick/tock sound &#8211; this was important.</p>
</li>
</ol>



<p>As such, the first thing to learn was how an escapement mechanism works, how it contributes to the timing of a clock, and the importance of the pendulum&#8217;s length and how it dictates the sound of the clock.</p>



<h3 class="wp-block-heading">First Steps: The Escapement</h3>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>While most people listen to music on their headphones on the way to work, last summer, I started listening to ChatGPT’s newly introduced voice mode to talk me through the history and detailed workings of a clock escapement mechanism. It allowed me to gain enough knowledge to start drawing my own deadbeat escapement.</p>
</blockquote>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/52c2b5c7-60b8-4f8e-8aa1-59b6b9773638_1806x1052.png" alt="" /></figure>



<p>A clock&#8217;s escapement is the heart of its mechanism, ingeniously translating the constant power from the gear train into the precise, rhythmic pulses that create the &#8220;tick-tock.&#8221; It performs two critical jobs: it allows the gears to &#8220;escape&#8221; forward one tooth at a time, regulating the speed of the hands, and it gives the pendulum a tiny push on each swing to overcome friction and keep it moving. The <strong>deadbeat escapement</strong>, perfected by George Graham around 1715, was a major leap in accuracy. Unlike earlier &#8220;recoil&#8221; escapements, where the escape wheel would kick backwards slightly after each tick, the deadbeat&#8217;s pallets are shaped so the teeth land &#8220;dead&#8221; with no recoil. This crucial improvement prevents the escapement from disturbing the pendulum&#8217;s natural, isochronous swing, making the clock significantly more accurate and establishing the design as the standard for precision regulator clocks. My 3D tool of choice for designing mechanisms and enclosures for devices is Autodesk Fusion 360; as such, all I needed was a reference drawing and the thought that getting a working escapement would be a good first step. Over at <a href="https://www.abbeyclock.com/aeb3.html">Abbey Clock</a> there is an excellent guide on drawing Graham pallets &#8211; which led to our own slightly modified design and, as pictured below, our first working escapement:</p>



<p>We haven&#8217;t published our prototypes yet, but I can share them if people are interested in either the escapement above or the next stage &#8211; the Pomodoro Timer, below. The escapement mechanism is powered by a weight and counterweight system with a 1-metre pendulum &#8211; this provided a way to perfect the initial &#8216;tick tock&#8217; of the clock, designed to provide a steady beat every second, allowing the escapement to move forward once per beat and, more importantly, make a full rotation once a minute.</p>
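<p>As a back-of-the-envelope check of why a roughly 1-metre pendulum suits a one-second tick, the idealised simple-pendulum formula can be sketched in Python (a small-angle approximation that ignores friction and the escapement&#8217;s impulses, so a real clock will differ slightly):</p>

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    # Idealised simple pendulum, small-angle approximation:
    # T = 2 * pi * sqrt(L / g)
    return 2 * math.pi * math.sqrt(length_m / g)

period = pendulum_period(1.0)  # full swing, there and back: ~2.0 s
beat = period / 2              # one swing, i.e. one tick or one tock: ~1.0 s
print(round(period, 2), round(beat, 2))
```

<p>Quadrupling the length doubles the period, which is why pendulum length, not weight, sets the beat of the clock.</p>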



<h3 class="wp-block-heading">Gear Ratios: The Pomodoro Timer</h3>



<p>The second step was to add an additional gear and gain an understanding of gear ratios. Gear ratios for a clock are all-important as they not only define the number of gears you need, but also define<strong> the relationship between them</strong>, turning the fast-paced energy of the escapement into the slow, readable passage of time.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/765c7800-8baa-4dad-881f-f90faa58576e_2084x1156.png" alt="" /></figure>



<p>At its core, a gear ratio translates speed and torque between rotating shafts. In our clock, we have a known starting speed: our <strong>escape wheel&#8217;s shaft</strong> rotates once every minute (60 seconds). To build a timer that rings a bell every 25 minutes, we needed to create a gear train that would complete one full rotation in that time.</p>



<p>The maths to figure out the required gear ratio is straightforward:</p>



<ul class="wp-block-list">
<li>
<p><strong>Target Rotation Time</strong>: 25 minutes = 25×60=1500 seconds.</p>
</li>



<li>
<p><strong>Source Rotation Time</strong> (Escape Wheel Shaft): 60 seconds.</p>
</li>



<li>
<p><strong>Required Gear Ratio</strong>: Target Time ÷ Source Time = 1500 ÷ 60 = 25, or 25:1 &#8211; as we see later on, it is the ratio which is important, and arguably, easier to understand.</p>
</li>
</ul>



<p>As you can see in our prototype, we achieved this with a <strong>compound gear train</strong> made of two identical stages, which makes the design elegant and easy to replicate. A compound gear is a cluster of two or more gears of different sizes that are fixed together on the same shaft, forcing them to rotate at the same speed. In short, instead of having one massive gear drive a tiny one to get a big gear ratio, a compound gear lets you achieve the same result in stages.</p>



<p>This setup is the key to creating a <strong>gear train</strong> that can achieve a large change in speed in a compact space:</p>



<ol class="wp-block-list">
<li>
<p><strong>First Stage</strong>: The shaft from the escapement has an <strong>8-tooth pinion</strong> that drives a <strong>40-tooth gear</strong>. This gives a reduction of 40 ÷ 8 = 5:1.</p>
</li>



<li>
<p><strong>Second Stage</strong>: Mounted on the same shaft as the first 40-tooth gear, a second <strong>8-tooth pinion</strong> drives the final <strong>40-tooth gear</strong>. This provides another reduction of 40 ÷ 8 = 5:1.</p>
</li>
</ol>



<p>The total reduction is the product of the individual stages: 5 × 5 = 25:1. This ratio perfectly transforms the 60-second rotation of the escapement shaft into the 25-minute rotation needed to trigger the bell. Or at least that&#8217;s how it should be in a perfect world. To be honest, I experimented a little, and my timings were a little out, coming in at approximately 20 minutes per bell ring, with the weight running the timer for an hour. The main point is that I had extended out from the escapement and used compound gears to start using ratios for timings. Of note, ChatGPT was useful for a wider understanding of the clock mechanism and theory, but it would frequently miscalculate gear trains and ratios, so the final design was done with old-fashioned logic.</p>
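<p>The ratio arithmetic above can be written as a quick check &#8211; a Python sketch of the maths only, not part of the published design files:</p>

```python
def train_ratio(stages):
    """Total reduction of a compound train.

    Each stage is (pinion_teeth, wheel_teeth); stage ratios multiply.
    """
    ratio = 1.0
    for pinion, wheel in stages:
        ratio *= wheel / pinion
    return ratio

# Target: one output rotation per 25 minutes from a 60-second input rotation.
required = (25 * 60) / 60                   # 1500 s / 60 s = 25.0
achieved = train_ratio([(8, 40), (8, 40)])  # two identical 5:1 stages
print(required, achieved)                   # both 25.0
```

<p>Any pair of stages whose ratios multiply to 25 would do; two identical 5:1 stages simply means printing the same gears twice.</p>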



<h3 class="wp-block-heading">TIME: 3D Printed Clock</h3>



<p>The final clock is simply a case of building out the number of gears, using the same logic as the Pomodoro Timer. I had an escapement rotating once a minute, and I needed the main gear rotating once an hour, so I could attach an hour hand to it &#8211; that’s a ratio of 60:1.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/7805b6ee-952f-4890-a295-16e1f69a886d_1624x1138.png" alt="" /></figure>



<p>Following that logic and a ratio of 60:1 &#8211;</p>



<ul class="wp-block-list">
<li>
<p><strong>Escapement</strong>: The Minute Gear &#8211; The escape wheel has 30 teeth, with a 10-tooth pinion on its shaft, delivering a 1-second &#8220;tick-tock&#8221; rhythm. The escape wheel completes one full rotation every minute.</p>
</li>



<li>
<p><strong>Gear 1</strong>: 40-tooth wheel (driven by the 10-tooth pinion) and 20-tooth pinion.</p>
</li>



<li>
<p><strong>Gear 2</strong>: 60-tooth wheel (driven by the 20-tooth pinion) and 24-tooth pinion.</p>
</li>



<li>
<p><strong>Gear 3</strong>: The Hour Gear &#8211; a 120-tooth wheel (driven by the 24-tooth pinion), which completes one rotation per hour and connects to a drive gear (details on the drive gear to follow).</p>
</li>
</ul>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/e8b2ae7d-1a4e-4c6c-8f28-717fa153c18f_1656x1214.png" alt="" /></figure>



<p>The gear ratios provide the necessary speed reduction to convert the escapement&#8217;s motion into hourly rotation. Starting from the escapement:</p>



<ul class="wp-block-list">
<li>
<p>The 10-tooth pinion drives the 40-tooth wheel of Gear 1, creating a 40:10 (or 4:1) reduction ratio—Gear 1 rotates at 1/4 the speed of the escape wheel.</p>
</li>



<li>
<p>Gear 1&#8217;s 20-tooth pinion drives Gear 2&#8217;s 60-tooth wheel, a 60:20 (or 3:1) ratio—Gear 2 rotates at 1/3 the speed of Gear 1.</p>
</li>



<li>
<p>Gear 2&#8217;s 24-tooth pinion drives Gear 3&#8217;s 120-tooth wheel, a 120:24 (or 5:1) ratio—Gear 3 rotates at 1/5 the speed of Gear 2.</p>
</li>
</ul>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>It all suddenly seems complicated &#8211; but if you look at it more simply, the cumulative reduction ratio is 4 × 3 × 5 = 60:1. Since the escape wheel rotates once per minute, Gear 3 rotates once every 60 minutes—or exactly once per hour.</p>
</blockquote>



<p>I could have chosen any ratio for the gears as long as it works out to 60:1.</p>



<p>With a main gear turning once an hour, upon which an hour hand can be attached, all that is now needed is a Drive Gear to power the clock and a small subset gear on which to put the minute hand.</p>



<h4 class="wp-block-heading">The Drive</h4>



<p>The drive gear serves a dual purpose as a catch mechanism, enabling the clock to be wound up while preventing unintended unwinding. This gear, connected to the drum, includes a ratchet and pawl system. When winding the clock, the ratchet allows the drum to rotate in the winding direction, lifting a weight of 2kg. The pawl engages the ratchet teeth, locking the drum in place to stop it from unwinding backwards once the winding is complete. This ensures the stored energy remains secure, releasing only through the controlled escapement and gear train during operation.</p>



<h4 class="wp-block-heading">The Minute Hand</h4>



<p>Finally, there is a separate gear chain for the minute hand, which fits behind the hour hand; this allows the traditional hour and minute hand configuration (the hour hand fits on a brass rod going through the hour gear, holding everything in place while also allowing it to move freely).</p>



<p>And that’s it! It seems simple when you break it down, although designing and building it from scratch took a little more time than I thought, and I lost count of the iterations it took to get here. The design will continue to be refined—perhaps with the addition of an hourly chime in the near future.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/08/11e5785d-98b9-45ad-89e8-029a75addc09_1526x1184.png" alt="" /></figure>



<p>For now, we hope you enjoy making the clock. Do let us know if you build one, over at <a href="https://www.printables.com/model/1375086-time-a-3d-printed-clock">Printables</a> (the 3D printed files are incoming, we are just collating the files)…</p>
<p>The post <a href="https://www.digitalurban.org/blog/2025/08/05/time-an-open-source-3d-printed-clock/">TIME: An Open Source 3D Printed Clock</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Vibe Coding: From iOS Apps to an Asteroid Clock and 3D Cities</title>
		<link>https://www.digitalurban.org/blog/2025/07/16/vibe-coding-from-ios-apps-to-an-asteroid-clock-and-3d-cities/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Wed, 16 Jul 2025 20:38:17 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Posts]]></category>
		<category><![CDATA[Gemini]]></category>
		<category><![CDATA[iOS App]]></category>
		<category><![CDATA[LLM]]></category>
		<category><![CDATA[Vibe Coding]]></category>
		<category><![CDATA[Weather]]></category>
		<guid isPermaLink="false">https://www.digitalurban.org/blog/2025/07/16/vibe-coding-from-ios-apps-to-an-asteroid-clock-and-3d-cities/</guid>

					<description><![CDATA[<p>Back in 2023, I wrote a blog post over on DigitalUrban.org creating an app called Frame-IT. It turns out I was using 'Vibe Coding' and it's changed the way I view the creation of almost everything...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2025/07/16/vibe-coding-from-ios-apps-to-an-asteroid-clock-and-3d-cities/">Vibe Coding: From iOS Apps to an Asteroid Clock and 3D Cities</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h4 class="wp-block-heading"><strong>Back in 2023, I wrote a blog post about creating an app called Frame-IT. It turns out I was using &#8216;Vibe Coding&#8217;, and it&#8217;s changed the way I view the creation of almost everything&#8230;</strong></h4>



<h2 class="wp-block-heading">Everything Changed</h2>



<p>The term ‘Vibe Coding’ was coined by Andrej Karpathy, a co-founder of OpenAI, in early 2025. The basic idea is to rely entirely on Large Language Models &#8211; LLMs &#8211; and to code using only natural language prompting, rather than writing the code yourself.</p>



<p>Back in 2023, I published ‘Frame-IT’, an iOS app, on the Apple Store, written in Swift but coded completely in the then-early version of ChatGPT. As I noted on <a href="https://www.digitalurban.org/">DigitalUrban</a>, every app has a story, from the idea through to storyboarding, finding a design team, and working with in-house computer scientists or outsourcing. All of these steps take time and resources, and often, for the small would-be developer, the obstacles are overwhelming, meaning that the spark of an idea gets lost.</p>



<p>Yet everything has changed: the whole landscape of developing, designing and creating has been turned on its head. It has reached the point where the spark of an idea can be coded, designed, marketed and launched with the help of Artificial Intelligence. Frame-IT is a very simple app, doing a simple thing, but it came from an idea, a want, a desire to do something which, before AI, would have got lost in logistics and in finding the time of computer scientists &#8211; and which would certainly have been economically unfeasible.</p>



<h3 class="wp-block-heading">The First Idea</h3>



<p>Frame-IT came about while looking at Samsung’s Frame televisions; beyond being normal 4K televisions, they blend into the background by displaying art, making it look as if it is mounted on a white background and enclosed in a frame &#8211; i.e. like a painting. Samsung does this well and the screens are excellent, but they are limited to artworks, which led to the idea that it would be nice to frame websites &#8211; specifically ‘data’-based websites &#8211; and hang data on walls, data that changes in real time. Of course, any screen could be used, but if you simply show data on a screen or monitor, no one notices. If you hang it in a frame, as if it’s art, people will notice.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/51d58c8b-8cc2-49af-a1cb-46048fa08d49_730x584.webp" alt="" /></figure>



<p>The concept was therefore simple: build an application that puts a picture frame around websites, allows users to select which websites to show and, if there is more than one, cycles through them, like a dashboard. This would allow small screens, such as iPads, to act as frames and, via AirPlay, devices such as projectors or general screens to gain a look beyond their norm, displaying websites and data in the same way as art hanging on a wall.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/170669e6-abd6-4279-9228-a053860fff00_1536x1009.png" alt="" /></figure>



<p>The problem was, this was just an idea; to code it would require knowledge of Swift, a library of ‘picture frames’ and probably a business case to justify the time of resident computer scientists (I work at University College London, so we have a few around), and to be honest it was such a simple idea I’m not sure anyone would listen. So, with the rise of AI, I decided to build it myself, using AI from start to finish, knowing little about Swift and only as much about AI as my then Twitter feed was full of, plus what I had read on sites such as The Verge.</p>



<p>The app took me two days to build &#8211; two days during which I was also working, so mainly doing things in between other tasks, over lunch and a little in the evenings. With the initial learning curve out of the way, I could rebuild it in a day. The first thing I did was sign up to OpenAI to gain access to GPT-4, which was released on 14 March 2023. This gave me access to the latest version and therefore the most up-to-date knowledge base, although, looking back, version 3 would probably have been fine and I could simply have used the free tier. My first prompt was: “Can you write me Swift code to take a website, centre it and add the image of a picture frame around it. The app should work full screen, in landscape mode.” I had a sample picture frame (taken from Wikipedia, with the actual image cut out to leave a transparent centre).</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/764c8b30-9da0-4793-ae7a-be228d42eb10_1000x1000.jpeg" alt="" /></figure>



<p>Within 60 seconds, GPT gave me a section of code and then stopped; it turned out there was a limit on the number of characters it could respond with. A quick Google search (ironically) showed me that by typing ‘continue’, ChatGPT would carry on with the code. With this I became a master of cutting and pasting, with the code often taking three continues, and code which, while sometimes in code boxes, was also often in plain text. It also allowed me to start to understand the layout and nature of Swift; within 30 minutes I had my first app running on an iPad, via Xcode, simply by cutting and pasting and pressing ‘play’. Not all things worked out &#8211; there were often errors, but errors I could cut and paste back into GPT, and it would solve them, most of the time.</p>



<p>Each time a milestone was reached &#8211; a working version, a version with a picture frame and a website showing &#8211; I would save the code, as ChatGPT would sometimes break the next version while I was asking for new features. Features such as ‘Add a button to move to the next website’ and ‘If there is more than one website, then cycle through them every 60 seconds’ (the websites were hard-coded at this point). Once I had a concept running on my own iPad, I realised it was quite neat and decided to adapt it so others might be able to use it. I added a settings page &#8211; via ‘Can you delete all the websites I have added and include a settings page; the settings page should be reached via a button and have the ability to add and delete websites; once added, they should be saved so the app remembers them when reopened’. This is coding, but in the new language &#8211; human language.</p>



<p>The picture frames were created in Image Creator by Bing, a search engine I never thought I would use (being a Mac user), but in that last week I had not been near Google. Bing allowed me to type in a prompt and get images without any user limits. Simple prompts such as “create me a gold Victorian picture frame, it should be photorealistic with minimum reflections and the centre should be cut out” worked amazingly well, providing me with an almost limitless range of picture frames to include as part of the assets.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/6635903d-3268-45db-b3ce-13e2ed6eb961_2048x1345-1.png" alt="" /></figure>



<p>Not all the picture frames are from AI; a couple are from open source imagery. It felt wrong to cut out the pictures, but the frames have some provenance and they were nice to include.</p>



<p>Marketing requires imagery as well &#8211; as such, I took a very simple picture of my two test iPads and used the ‘PhotoRoom’ AI app to transform it into two iPads on a bench with a concrete wall behind, whereas the reality was far less glamorous.</p>



<p>The <a href="https://apps.apple.com/us/app/frame-it/id6447362214">app is now available via the Apple Store</a>; it even <a href="https://frame-it.framer.website/">has its own website</a> and logo &#8211; again designed by AI, using <a href="https://looka.com/">Looka</a>, and hosted on <a href="https://framer.com/">Framer</a>. So while it worked, it was a little painful with the limited lengths of code in the early version of ChatGPT and the endless cutting, pasting and fixing &#8211; it was quick, but it still took a notable commitment and time. It is also a very, very niche app, but as we will get to, that’s the point of Vibe Coding.</p>



<h3 class="wp-block-heading">And then Everything Changed Again</h3>



<p>That was 2023. By late 2024, everything had changed again &#8211; not only the workflow and the tools, but also the ability of the LLMs to Vibe Code. They now work: no more cutting and pasting small chunks of code. Code comes out ready to go, and any errors can be fixed across a complete code base, with notably fewer errors than before. As such, we made another app &#8211; again Vibe Coding, without knowing it at the time &#8211; MQTTFrame.</p>



<p>The aim was to build on the concept of Frame-IT but allow real-time data integration directly in the app via MQTT (a lightweight, publish-subscribe network protocol for data) and to make it easy to subscribe to any MQTT topic and display live updates.</p>

<div class="pullquote">
<p>It is a niche thing to want to do, but that’s what Vibe Coding is great for &#8211; building those apps that you want, but perhaps few others do.</p>
</div>

<p>Our aim was to allow:</p>



<ul class="wp-block-list">
<li>
<p><strong>Customizable Frames</strong>: Choose from multiple high-quality decorative frames to complement your home or office décor.</p>
</li>



<li>
<p><strong>Clean and Clear Data Display</strong>: Showcase important data such as temperature, pressure, news, and more with clear typography and layout.</p>
</li>



<li>
<p><strong>Fullscreen Mode</strong>: Tap to seamlessly switch to an immersive fullscreen view, maximizing readability and aesthetics.</p>
</li>



<li>
<p><strong>Easy Management</strong>: Add, edit, and reorder topics quickly through an intuitive interface.</p>
</li>



<li>
<p><strong>Instant Updates</strong>: Receive real-time data updates over MQTT, ensuring you’re always in the know.</p>
</li>
</ul>



<p>And that&#8217;s what we did &#8211; but this time in a day, and with everything created in ChatGPT &#8211; icons, graphics, code, marketing text, and even walking me back through how to get it on the Apple Store (as it&#8217;s an oddly complex and easily forgotten process).</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/d4b1fad6-69b7-427f-8106-0d6459dca027_2014x1348.png" alt="" /></figure>



<p>So in one day of Vibe Coding, an app was taken from concept to ready to publish on the Apple Store &#8211; if you want to try it, it&#8217;s currently <a href="https://apps.apple.com/gb/app/mqtt-frame/id6743003699">available for free on the Apple Store</a>.</p>



<h3 class="wp-block-heading">Everything Again and Again</h3>



<p>That was the end of 2024. Now, in mid-2025, the term Vibe Coding has come into common use, and Large Language Models are more than capable of making anything from webpages to apps to writing posts (all of this was, however, written by an actual human; any grammatical errors are, of course, intentional). We have now switched from ChatGPT to Google Gemini and use it almost daily to produce and make apps or webpages that perhaps only I want in my life, such as an Asteroids Clock, made in 30 minutes (temporarily housed at <a href="https://finchamweather.co.uk/astroidsclock.html">https://finchamweather.co.uk/astroidsclock.html</a> &#8211; we will move it to GitHub as soon as we get the chance).</p>

<div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;58d654b0-ee21-4fb6-ab79-a58beb898566&quot;,&quot;duration&quot;:null}"> </div>

<p>or a Procedural City Maker (<a href="https://finchamweather.co.uk/proceduralcity.html">https://finchamweather.co.uk/proceduralcity.html)</a></p>

<div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;8abdc2e7-b243-40d0-8713-c62e50d0623b&quot;,&quot;duration&quot;:null}"> </div>

<p>or a Weather Dashboard with the forecast made into the image of an English Landscape using data from the API from the UK Met Office:</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://www.digitalurban.org/wp-content/uploads/2025/07/43df66a8-a90e-4fe8-8edd-279c8d09aaaa_2908x1538.png" alt="" /></figure>



<p>You can view the dashboard live at <a href="https://finchamweather.co.uk/aiweatherimage.html">https://finchamweather.co.uk/aiweatherimage.html</a></p>



<p>All small, niche things that somehow I love, and I am happy that Vibe Coding allowed them to exist &#8211; that&#8217;s the magic: Vibe Coding lets anything in your imagination come into being. It’s a great time to be making digital things.</p>



<p>If you liked this post, please do subscribe, it gives us a good reason to write another one…</p>


<hr class="wp-block-separator has-css-opacity" />
<p>The post <a href="https://www.digitalurban.org/blog/2025/07/16/vibe-coding-from-ios-apps-to-an-asteroid-clock-and-3d-cities/">Vibe Coding: From iOS Apps to an Asteroid Clock and 3D Cities</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Particles &#8211;  3dsMax and Lumion/Unity</title>
		<link>https://www.digitalurban.org/blog/2014/09/01/particles-3dmax-and-lumionunity/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Mon, 01 Sep 2014 10:43:24 +0000</pubDate>
				<category><![CDATA[3D Max]]></category>
		<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[3dsmax]]></category>
		<category><![CDATA[lumion]]></category>
		<category><![CDATA[lumion3d]]></category>
		<category><![CDATA[particles]]></category>
		<category><![CDATA[Unity]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3683</guid>

					<description><![CDATA[<p>Particle Flow is a versatile, powerful particle system for Autodesk&#8217;s 3ds Max. It employs an event-driven model, using a special dialog called Particle View, allowing you to combine individual operators that describe particle properties such...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2014/09/01/particles-3dmax-and-lumionunity/">Particles &#8211;  3dsMax and Lumion/Unity</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="blurb" style="color: #000000;">Particle Flow is a versatile, powerful particle system for Autodesk&#8217;s 3ds Max. It employs an event-driven model, using a special dialog called Particle View, which lets you combine individual operators that describe particle properties such as shape, speed, direction, and rotation over a period of time into groups called events. Each operator provides a set of parameters, many of which you can animate to change particle behaviour during the event. As the event transpires, Particle Flow continually evaluates each operator in the list and updates the particle system accordingly.</p>
<div id="attachment_3685" style="width: 600px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" aria-describedby="caption-attachment-3685" class=" wp-image-3685" src="https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-1024x560.jpg" alt="pFlow 3ds Max" width="590" height="322" srcset="https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-1024x560.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-300x164.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-768x420.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-1536x841.jpg 1536w, https://www.digitalurban.org/wp-content/uploads/2014/09/Screen3DMaxParticles1-1-2048x1121.jpg 2048w" sizes="(max-width: 590px) 100vw, 590px" /><p id="caption-attachment-3685" class="wp-caption-text">pFlow 3ds Max</p></div>
<p style="color: #000000;">To achieve more substantial changes in particle properties and behaviour, you can create a flow. The flow sends particles from event to event using tests, which let you wire events together in series. A test can check, for example, whether a particle has passed a certain age, how fast it&#8217;s moving, or whether it has collided with a deflector. Particles that pass the test move on to the next event, while those that don&#8217;t meet the test criteria remain in the current event, possibly to undergo other tests. The simple example pictured above shows a pFlow dialogue determining the birth of particles linked to a target geometry. The particles can subsequently be baked (using <a href="http://www.oferz.com/maxscripts.php">pFlow Baker</a>) into an animation timeline for simple output via .fbx, allowing import into external systems such as Unity or Lumion.</p>
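<p>The event/test model can be sketched outside 3ds Max too. The following is a minimal, purely illustrative Python analogue (real pFlow operators only run inside Particle View) in which particles stay in a &#8216;birth&#8217; event until an age test passes them on to the next event:</p>

```python
# A minimal, purely illustrative Python analogue of Particle Flow's
# event/test model (real pFlow operators run inside 3ds Max's Particle View).
# Particles remain in the current event until a test passes them on.

def age_test(particle, threshold=30):
    # Test: has the particle passed a certain age?
    return particle["age"] > threshold

def simulate(particles, steps):
    events = {"birth": list(particles), "falling": []}
    for _ in range(steps):
        for p in events["birth"]:
            p["age"] += 1          # operator: advance particle age
            p["y"] += p["speed"]   # operator: move the particle
        # Particles that pass the test flow to the next event;
        # those that don't remain in the current event.
        events["falling"] += [p for p in events["birth"] if age_test(p)]
        events["birth"] = [p for p in events["birth"] if not age_test(p)]
    return events

parts = [{"age": a, "y": 0.0, "speed": 1.0} for a in (0, 25)]
result = simulate(parts, 10)
print(len(result["birth"]), len(result["falling"]))  # 1 1
```

<p>After ten steps the older particle has aged past the threshold and flowed to the second event, while the younger one remains in the first &#8211; exactly the wiring the Particle View dialogue expresses graphically.</p>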
<p><center><iframe width="640" height="360" src="//www.youtube.com/embed/Ex4SyeQMpFU" frameborder="0" allowfullscreen></iframe></center></p>
<p style="color: #000000;">The clip above illustrates the pFlow system imported into Lumion with the addition of a scene created in CityEngine.</p>
<p>The post <a href="https://www.digitalurban.org/blog/2014/09/01/particles-3dmax-and-lumionunity/">Particles &#8211;  3dsMax and Lumion/Unity</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Making of the CASA Oculus Rift Urban Roller Coaster</title>
		<link>https://www.digitalurban.org/blog/2014/08/26/the-making-of-the-casa-oculus-rift-urban-roller-coaster/</link>
					<comments>https://www.digitalurban.org/blog/2014/08/26/the-making-of-the-casa-oculus-rift-urban-roller-coaster/#comments</comments>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Tue, 26 Aug 2014 14:48:54 +0000</pubDate>
				<category><![CDATA[3D Max]]></category>
		<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Architectural Visualisation]]></category>
		<category><![CDATA[SketchUp]]></category>
		<category><![CDATA[3dmax]]></category>
		<category><![CDATA[oculus rift]]></category>
		<category><![CDATA[Unity]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3657</guid>

					<description><![CDATA[<p>Earlier this year CASA was invited to create a virtual reality exhibit for the Walking on Water exhibition, partnered with Grand Designs Live at London’s ExCeL. While CASA has a...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2014/08/26/the-making-of-the-casa-oculus-rift-urban-roller-coaster/">The Making of the CASA Oculus Rift Urban Roller Coaster</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Earlier this year <a href="http://www.casa.ucl.ac.uk">CASA</a> was invited to create a virtual reality exhibit for the Walking on Water exhibition, partnered with Grand Designs Live at London’s ExCeL. While CASA tends to spend a lot of time thinking seriously about cities and data, it was quickly decided that a fun and novel way to engage the 100,000 or so expected visitors would be an urban roller coaster ride using the Oculus Rift virtual reality headset. <a href="http://virtualarchitectures.wordpress.com/">Oliver Dawkins</a>, a student on our <a href="http://www.bartlett.ucl.ac.uk/casa/programmes/postgraduate">MRes in Advanced Spatial Analysis</a>, is a leading light in urban visualisation and the Oculus Rift; as such, he kindly offered to lead the development. In the following guest post, Oliver (of <a href="http://virtualarchitectures.wordpress.com/">http://virtualarchitectures.wordpress.com/</a>) talks us through the development process&#8230;<br />
<center><iframe src="//player.vimeo.com/video/104185315?title=0" width="600" height="337" frameborder="0" allowfullscreen="allowfullscreen"></iframe><a href="http://vimeo.com/104185315">CASA Urban Roller Coaster</a> from <a href="http://vimeo.com/virtualarchitectures">Virtual Architectures</a> on <a href="https://vimeo.com">Vimeo</a>.</center>The first tool chosen for this project was the Unity game engine because it provides a very simple means of integrating the Oculus Rift virtual reality headset into a real-time 3D experience. Initial tests were made in Unity with a pre-made roller coaster model downloaded from the Unity Asset Store. However, rather than simply place that roller coaster in an urban setting I wanted to create a track that would be unique to this experience and feel like it might have been part of the urban infrastructure. Due to time constraints it was not possible to model the urban scene from scratch. Instead I decided to generate it procedurally in Autodesk 3ds MAX using a great free script called ghostTown Lite.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3658" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_01-1-1024x528.jpg" alt="" width="654" height="295" /><br />
Although I like to use SketchUp for 3D modelling wherever possible, 3ds MAX was much better suited to this project as it allowed me to quickly generate the city scene, model the roller coaster track, and animate the path of the ride, all in one software package. After generating the urban scene I used the car from the Asset Store roller coaster as a guide for modelling my track in the correct proportions.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3659" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_02-1-1024x528.jpg" alt="making_of_casa_roller_coaster_02" width="687" height="354" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_02-1-1024x528.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_02-1-300x155.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_02-1-768x396.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_02-1.jpg 1366w" sizes="auto, (max-width: 687px) 100vw, 687px" /><br />
The path of the ride through the city was modelled using Bezier splines, first in the Top view to get the rough layout and then in the Front and Left views to ensure the path would clear the buildings in my scene. The experience needed to be comfortable for users who may not have experienced virtual reality before, so it was agreed to exclude loop-the-loops on this occasion. It was also important to avoid bends that would be too sharp for the roller coaster to realistically follow. Once I was happy with the path I welded all the vertices in my splines so that the path could be used to animate the movement of the roller coaster car along the track later.<br />
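For the curious, the maths behind each Bezier segment is straightforward. Below is a small illustrative Python sketch of De Casteljau evaluation (3ds MAX performs the equivalent internally); the control points are invented, a segment rising to clear a building:

```python
# Illustrative only: evaluate one cubic Bezier segment with De Casteljau's
# algorithm via repeated linear interpolation. 3ds MAX evaluates its splines
# internally; the control points here are invented for the example.

def cubic_bezier(p0, p1, p2, p3, t):
    lerp = lambda a, b, u: tuple((1 - u) * ai + u * bi for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)  # first level
    d, e = lerp(a, b, t), lerp(b, c, t)                          # second level
    return lerp(d, e, t)                                         # point on curve

print(cubic_bezier((0, 0, 0), (10, 0, 30), (20, 0, 30), (30, 0, 0), 0.5))
# (15.0, 0.0, 22.5) -- the apex of the arc at the midpoint
```

Raising the middle two control points lifts the whole arc, which is exactly the adjustment made in the Front and Left views to clear the buildings.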
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3660" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_03-1-1024x528.jpg" alt="making_of_casa_roller_coaster_03" width="700" height="361" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_03-1-1024x528.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_03-1-300x155.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_03-1-768x396.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_03-1.jpg 1366w" sizes="auto, (max-width: 700px) 100vw, 700px" /><br />
Next, sections of track were added to the path I’d created using the 3ds MAX PathDeform (WSM) modifier. As the name suggests, this modifier deforms selected geometry to follow a chosen path. Using this modifier massively simplified the process by allowing my pre-made sections of track to be offset along the length of the path and then stretched, rotated and twisted to fit together as seamlessly as possible. This was the most intricate and time-consuming part of the project.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3661" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_04-1-1024x528.jpg" alt="making_of_casa_roller_coaster_04" width="708" height="365" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_04-1-1024x528.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_04-1-300x155.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_04-1-768x396.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_04-1.jpg 1366w" sizes="auto, (max-width: 708px) 100vw, 708px" /><br />
In order to minimise the potential for motion sickness with the Oculus Rift, I was careful to keep the rotation of the track as close to the horizontal plane as possible. Supporting struts were then arrayed along the path of the track and positioned to anchor it to the rest of the scene. When I was satisfied, a ‘Snapshot’ was made of the geometry in 3ds MAX to create a single mesh ready for export to Unity. At this point the path-deformed sections of track could be deleted, as Unity does not recognise the modifier.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3662" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_05-1-1024x528.jpg" alt="making_of_casa_roller_coaster_05" width="747" height="385" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_05-1-1024x528.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_05-1-300x155.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_05-1-768x396.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_05-1.jpg 1366w" sizes="auto, (max-width: 747px) 100vw, 747px" /><br />
To create the movement of the roller coaster car along the track, a 3ds MAX dummy helper was constrained to the path I’d created earlier. This generated starting and ending key frames on the animation timeline. The roller coaster car model was then placed on the track and linked to the dummy helper. It is possible in 3ds Max to have the velocity and banking of the dummy calculated automatically, but I found that this did not give a realistic feel. Instead I controlled both by editing the animation key frames, using a camera linked to the dummy for reference. This was time intensive but gave a better result. The city scene, roller coaster and animation were exported as a single FBX, which is the preferred import format for 3D geometry in Unity.<br />
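Conceptually, the path constraint does something like the following illustrative Python sketch, which finds the car&#8217;s position a given fraction of the way along a polyline approximation of the track (the waypoints are invented; 3ds Max works on the actual spline, and the animated key frames drive the fraction over time):

```python
# Illustrative sketch of a path constraint: given waypoints approximating the
# track, find the position at fraction u of the total path length -- roughly
# what animating the dummy helper's "percent along path" does in 3ds Max.
import math

def point_at(waypoints, u):
    seg_lengths = [math.dist(a, b) for a, b in zip(waypoints, waypoints[1:])]
    target = u * sum(seg_lengths)  # distance to travel along the path
    for (a, b), length in zip(zip(waypoints, waypoints[1:]), seg_lengths):
        if target <= length:
            t = target / length  # interpolate within this segment
            return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
        target -= length
    return waypoints[-1]

track = [(0, 0), (10, 0), (10, 10)]
print(point_at(track, 0.75))  # three quarters of the way along: (10.0, 5.0)
```

Easing the fraction non-linearly over the timeline is what gives the hand-tuned velocity described above.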
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3663" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_06-1-1024x528.jpg" alt="making_of_casa_roller_coaster_06" width="654" height="337" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_06-1-1024x528.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_06-1-300x155.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_06-1-768x396.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_06-1.jpg 1366w" sizes="auto, (max-width: 654px) 100vw, 654px" /><br />
Having completed the track and animated the car, it was time to assemble the final scene in Unity. First I generated a terrain using a great plugin called World Composer. This enables you to import satellite imagery and terrain heights from Bing Maps to give your backdrops a high degree of realism.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3664" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_07-1-1024x526.jpg" alt="making_of_casa_roller_coaster_07" width="769" height="395" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_07-1-1024x526.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_07-1-300x154.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_07-1-768x395.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_07-1.jpg 1366w" sizes="auto, (max-width: 769px) 100vw, 769px" /><br />
The urban scene and roller coaster were then imported and a skybox and directional light were added. The scene was completed with various assets from the Unity Asset Store including skyscrapers, roof objects, vehicles, idling characters and a flock of birds.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3666" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_09-1-1024x526.jpg" alt="making_of_casa_roller_coaster_09" width="787" height="404" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_09-1-1024x526.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_09-1-300x154.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_09-1-768x395.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_09-1.jpg 1366w" sizes="auto, (max-width: 787px) 100vw, 787px" /><br />
To prepare the Oculus Rift integration, the OVR camera controller asset from Oculus was placed inside and parented to the roller coaster car. In my initial tests with the Asset Store roller coaster I’d found that the OVR camera would drift from the forward-facing position. This would disorientate the user and contribute to motion sickness. As a quick fix, I parented a cube to the front of the roller coaster car, turned off rendering of the cube so it would be invisible, and set the camera controller to follow the cube.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3667" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_10-1-1024x526.jpg" alt="making_of_casa_roller_coaster_10" width="801" height="411" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_10-1-1024x526.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_10-1-300x154.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_10-1-768x395.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_10-1.jpg 1366w" sizes="auto, (max-width: 801px) 100vw, 801px" /><br />
To ensure the best possible virtual experience it is really important to keep the rendered frames per second as high as possible. As the Oculus Rift renders two cameras simultaneously, one for each eye, you should aim to render 60 fps in Unity so the user can expect an effective frame rate of 30 fps.<br />
To achieve this I took advantage of occlusion culling in Unity Pro, which prevents objects from being rendered when they are outside the camera’s field of view or obscured by other objects.<br />
<img loading="lazy" decoding="async" class="aligncenter  wp-image-3668" src="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_11-1-1024x526.jpg" alt="making_of_casa_roller_coaster_11" width="777" height="399" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_11-1-1024x526.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_11-1-300x154.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_11-1-768x395.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2014/08/making_of_casa_roller_coaster_11-1.jpg 1366w" sizes="auto, (max-width: 777px) 100vw, 777px" /><br />
I also baked the shadows for all static objects in the scene, saving them as textures and sparing the processor from calculating them dynamically. The only objects casting dynamic shadows are the roller coaster car and the animated characters.<br />
Finally, two simple JavaScript scripts were added. The first would start the roller coaster and <img loading="lazy" decoding="async" class="alignright wp-image-3669 size-full" src="https://www.digitalurban.org/wp-content/uploads/2014/08/uclprovost3-1.jpg" alt="uclprovost3" width="460" height="340" srcset="https://www.digitalurban.org/wp-content/uploads/2014/08/uclprovost3-1.jpg 460w, https://www.digitalurban.org/wp-content/uploads/2014/08/uclprovost3-1-300x222.jpg 300w" sizes="auto, (max-width: 460px) 100vw, 460px" />play a roller coaster sound file upon pressing the ‘S’ key. The second closed the roller coaster application upon pressing the ‘Esc’ key.<br />
The reception of the CASA Urban Roller Coaster ride at Grand Designs Live was fantastic, and I’m really pleased to have participated. It was a great project to work on and an excellent opportunity to learn new techniques in 3ds MAX and Unity. With my first VR roller coaster under my belt, I’m looking forward to building another, truly terrifying one when I get the time &#8211; hopefully for the Oculus Rift DK2, which has just arrived at CASA.<br />
On a last note, I’d like to thank Tom Hoffman of Lake Erie Digital, whose excellent YouTube tutorials on creating roller coasters in 3ds MAX provided a great guide through the most difficult part of this challenging project.<br />
You can follow Oliver&#8217;s latest work at <a href="http://virtualarchitectures.wordpress.com/">http://virtualarchitectures.wordpress.com/</a>&#8230;</p>
<p>The post <a href="https://www.digitalurban.org/blog/2014/08/26/the-making-of-the-casa-oculus-rift-urban-roller-coaster/">The Making of the CASA Oculus Rift Urban Roller Coaster</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digitalurban.org/blog/2014/08/26/the-making-of-the-casa-oculus-rift-urban-roller-coaster/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>3D Printed Mechanical Clock</title>
		<link>https://www.digitalurban.org/blog/2013/09/14/3d-printed-clock/</link>
					<comments>https://www.digitalurban.org/blog/2013/09/14/3d-printed-clock/#comments</comments>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Sat, 14 Sep 2013 13:26:50 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[3D Printing]]></category>
		<category><![CDATA[Printed Clock]]></category>
		<category><![CDATA[Replicator 2]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3488</guid>

					<description><![CDATA[<p>The rise in 3D printers and the move towards semi-consumer level models, such as MakerBot Replicator 2, opens up a wealth of opportunity to build everyday items. With a Replicator...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/09/14/3d-printed-clock/">3D Printed Mechanical Clock</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The rise in 3D printers and the move towards semi-consumer-level models, such as the <a href="http://store.makerbot.com/replicator2.html">MakerBot Replicator 2</a>, opens up a wealth of opportunity to build everyday items. With a Replicator 2 in the corner of the office here at <a href="http://www.casa.ucl.ac.uk">CASA</a>, University College London, we thought we would try to print a weight-powered 3D clock. There are a number of sites online that provide plans or kits for wooden clocks, often aimed at CNC-type machines or simple scroll-saw cutting of the individual cogs. A key site is <a title="Wooden Clock Plans and Kits" href="http://www.woodentimes.com">woodentimes.com</a>; the clock we have printed is a modified version of the <a href="http://www.woodentimes.com/septimus.html">Septimus</a>.<br />
<div id="attachment_3494" style="width: 689px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3494" class=" wp-image-3494" title="Replicator 2 Printing Cogs" alt="Replicator 2 Printing Cogs" src="https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-1024x768.jpg" width="679" height="509" srcset="https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-1024x768.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-300x225.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-768x576.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-1536x1152.jpg 1536w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0078-1-2048x1536.jpg 2048w" sizes="auto, (max-width: 679px) 100vw, 679px" /><p id="caption-attachment-3494" class="wp-caption-text">Replicator 2 Printing Cogs</p></div><br />
The parts were created in the free version of SketchUp, via a DXF plan, and exported to .stl for import into MakerWare. 3D printing is still a hit-and-miss affair, so we printed each part individually to minimise the risk of any printing errors on the Replicator.<br />
<div id="attachment_3495" style="width: 708px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3495" class=" wp-image-3495  " alt="3D Printed Parts" src="https://www.digitalurban.org/wp-content/uploads/2013/09/Printed-Parts-1-1024x671.jpg" width="698" height="457" /><p id="caption-attachment-3495" class="wp-caption-text">3D Printed Parts</p></div><br />
In general, printing in the centre of the Replicator reduces errors; we also added a raft to each cog and printed at 100% infill to increase the strength of the final print. Each cog took approximately 2 hours to print, with the frame sections taking 3 to 4 hours.<br />
<div id="attachment_3490" style="width: 631px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3490" class=" wp-image-3490  " alt="3D Printed Clock" src="https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-1024x768.jpg" width="621" height="466" srcset="https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-1024x768.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-300x225.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-768x576.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-1536x1152.jpg 1536w, https://www.digitalurban.org/wp-content/uploads/2013/09/IMG_0086-1-2048x1536.jpg 2048w" sizes="auto, (max-width: 621px) 100vw, 621px" /><p id="caption-attachment-3490" class="wp-caption-text">3D Printed Clock</p></div><br />
The complete clock took 4 days to print; it runs on a 600g weight and requires winding every 48 hours &#8211; the clip below details the completed 3D printed clock:<br />
<center><iframe loading="lazy" src="//www.youtube.com/embed/1d5pbnsX14c" height="480" width="640" allowfullscreen="" frameborder="0"></iframe></center> 3D printing opens up any number of possibilities. At the moment it is still slightly experimental, and creating the clock was a process of trial and error, especially in terms of the 3D printer settings. The ability to load up SketchUp, model an item and have a 3D printed version in a few hours still fills me with wonder though&#8230;</p>
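<p>The cogs themselves obey simple gear-train arithmetic: the overall ratio of a train is the product of the driven-to-driving tooth counts at each mesh, which is how a clock steps one shaft down (for example, minute hand to hour hand is 12:1). A short illustrative Python sketch &#8211; the tooth counts are hypothetical, not taken from the Septimus plans:</p>

```python
# Illustrative gear-train arithmetic only (tooth counts are made up,
# not the Septimus plans' actual values). Each mesh multiplies the
# overall ratio by driven_teeth / driving_teeth.
from functools import reduce

def train_ratio(pairs):
    """pairs: list of (driving_teeth, driven_teeth) meshes in the train."""
    return reduce(lambda ratio, p: ratio * (p[1] / p[0]), pairs, 1.0)

# A hypothetical two-stage 12:1 reduction: 10->30 teeth, then 8->32 teeth.
print(train_ratio([(10, 30), (8, 32)]))  # 12.0
```

<p>In practice the printed tooth counts also have to respect minimum tooth sizes the printer can resolve, which is part of the trial and error mentioned above.</p>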
<p>The post <a href="https://www.digitalurban.org/blog/2013/09/14/3d-printed-clock/">3D Printed Mechanical Clock</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digitalurban.org/blog/2013/09/14/3d-printed-clock/feed/</wfw:commentRss>
			<slash:comments>17</slash:comments>
		
		
			</item>
		<item>
		<title>UCL Live Campus Augmented Reality App &#8211; Created by Masters Students at CASA</title>
		<link>https://www.digitalurban.org/blog/2013/06/20/ucl-live-augmented-reality-app-created-by-masters-students-at-casa/</link>
					<comments>https://www.digitalurban.org/blog/2013/06/20/ucl-live-augmented-reality-app-created-by-masters-students-at-casa/#comments</comments>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Thu, 20 Jun 2013 08:23:03 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Agent Based Modelling]]></category>
		<category><![CDATA[android]]></category>
		<category><![CDATA[Augmented Reality]]></category>
		<category><![CDATA[Game Engines]]></category>
		<category><![CDATA[SketchUp]]></category>
		<category><![CDATA[Unity]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[CASA]]></category>
		<category><![CDATA[GIS]]></category>
		<category><![CDATA[MRes]]></category>
		<category><![CDATA[UCL Map]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3437</guid>

					<description><![CDATA[<p>UCLive is an Augmented Reality Map of UCL developed by students on the Masters in Advanced Spatial Analysis and Visualisation at CASA. Featuring live data, the augmented reality android app works by simply pointing...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/06/20/ucl-live-augmented-reality-app-created-by-masters-students-at-casa/">UCL Live Campus Augmented Reality App &#8211; Created by Masters Students at CASA</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>UCLive is an Augmented Reality Map of UCL developed by students on the <a title="CASA Masters Course" href="http://www.bartlett.ucl.ac.uk/casa/programmes/postgraduate/mres-advanced-spatial-analysis-visualisation">Masters in Advanced Spatial Analysis and Visualisation</a> at CASA. Featuring live data, the augmented reality Android app works by simply pointing your mobile device at any of the UCL maps across campus.<br />
Running in Unity and mixing a number of GIS and Agent Based Modelling elements, the development is outlined below:</p>
<ol>
<li>A three-dimensional base model was created in SketchUp on top of the UCL campus map;</li>
<li>Moving agents are added into the application to show local bus and tube routes;</li>
<li>Sounds are embedded into the application from the <a href="https://soundcloud.com/uclsound">UCL Soundcloud</a>;</li>
<li>Live online data on buildings, Twitter and PC cluster availability is displayed;</li>
<li>The user’s location is shown in the application map.</li>
</ol>
<div id="attachment_3438" style="width: 689px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3438" class=" wp-image-3438 " alt="Augmented Reality UCL Map" src="https://www.digitalurban.org/wp-content/uploads/2013/06/Screen-Shot-2013-06-20-at-08.19.51-1-1024x535.png" width="679" height="354" /><p id="caption-attachment-3438" class="wp-caption-text">Augmented Reality UCL Map</p></div><br />
Produced by Ayana Kito ( <a dir="ltr" title="http://3rdplat.blogspot.co.uk" href="http://3rdplat.blogspot.co.uk/" target="_blank" rel="nofollow noopener">http://3rdplat.blogspot.co.uk</a> ), Balamurgan Soundararaj ( <a dir="ltr" title="http://geoidin.wordpress.com" href="http://geoidin.wordpress.com/" target="_blank" rel="nofollow noopener">http://geoidin.wordpress.com</a> ), Daniel Lam ( <a dir="ltr" title="http://spatiametrics.wordpress.com" href="http://spatiametrics.wordpress.com/" target="_blank" rel="nofollow noopener">http://spatiametrics.wordpress.com</a> ) and Nicola Clark ( <a dir="ltr" title="http://cityofblindinglights.org" href="http://cityofblindinglights.org/" target="_blank" rel="nofollow noopener">http://cityofblindinglights.org</a> ), it represents the output of the final visualisation project on the masters course. The movie below illustrates the app in action:<br />
<center><iframe loading="lazy" src="http://www.youtube.com/embed/1PKKn1UV2go" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></center>Take a look at the main <a title="UCLive" href="http://ucliveproject.wordpress.com/2013/06/18/uclive-a-live-map-for-a-living-campus/">UCLive page for full details</a> and updates; we are currently exploring next steps to move the app towards release on Android and iOS.<br />
You can still apply for <a title="Join the CASA Masters Programme" href="http://www.bartlett.ucl.ac.uk/casa/news/2013-05-28-MRes-Places">2013/14 entry to the MRes in Advanced Spatial Analysis and Visualisation</a>; the final deadline for full-time applications is August 2nd.</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/06/20/ucl-live-augmented-reality-app-created-by-masters-students-at-casa/">UCL Live Campus Augmented Reality App &#8211; Created by Masters Students at CASA</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.digitalurban.org/blog/2013/06/20/ucl-live-augmented-reality-app-created-by-masters-students-at-casa/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>UCL Quad &#8211; Procedural City and Lumion</title>
		<link>https://www.digitalurban.org/blog/2013/06/12/ucl-quad-procedural-city-and-lumion/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Wed, 12 Jun 2013 09:24:48 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Lumion]]></category>
		<category><![CDATA[UCL]]></category>
		<category><![CDATA[cityengine]]></category>
		<category><![CDATA[lumion]]></category>
		<category><![CDATA[SketchUp]]></category>
		<category><![CDATA[UCL Quad]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3429</guid>

					<description><![CDATA[<p>The Quadrangle at University College London was designed by William Wilkins and constructed between 1827 and 1828. It is a natural subject for urban research, as its surroundings provide a mix of architectural styles....</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/06/12/ucl-quad-procedural-city-and-lumion/">UCL Quad &#8211; Procedural City and Lumion</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The Quadrangle at University College London was designed by William Wilkins and constructed between 1827 and 1828. It is a natural subject for urban research, as its surroundings provide a mix of architectural styles. The 3D model of the quad was built using SketchUp, with photos grabbed via a mobile phone &#8211; quick and simple.<br />
<img loading="lazy" decoding="async" class=" wp-image-3430  aligncenter" style="text-align: center; font-size: 13px; line-height: 19px;" title="UCL Quad - Lumion" alt="UCL Quad - Lumion" src="https://www.digitalurban.org/wp-content/uploads/2013/06/Screen-Shot-2013-06-12-at-09.49.28-1-1024x539.png" width="679" height="357" /></p>
<dl class="wp-caption aligncenter" id="attachment_3430" style="width: 689px;">
<dd class="wp-caption-dd">UCL Quad &#8211; Lumion</dd>
</dl>
<p>Regular readers will recognise the model from our previous <a href="http://www.digitalurban.org/2008/11/back-to-oblivion-final-import-movie.html">Elder Scrolls IV Game Engine work</a>. We found the model on an old hard drive while working with a group of students on a new UCL project, so we took the opportunity to load it into Lumion and add it to a future Bloomsbury created using CityEngine. Finally, the falling cubes (branded for <a href="http://www.casa.ucl.ac.uk">CASA</a>, home of Digital Urban) are via 3ds Max, linked in from a <a href="http://www.digitalurban.org/2012/08/getting-started-with-simulations-in-3d-max-greeble-and-massfx.html">previous tutorial on MassFX</a>:</p>
<p><center><iframe loading="lazy" src="http://www.youtube.com/embed/syCQSi-UuMg" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></center>The building is centred on a Corinthian portico; after completing University College London, Wilkins went on to design the National Gallery in Trafalgar Square.</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/06/12/ucl-quad-procedural-city-and-lumion/">UCL Quad &#8211; Procedural City and Lumion</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Police Agent City Ball &#8211; Unity/NavMesh and CityEngine</title>
		<link>https://www.digitalurban.org/blog/2013/05/01/police-agent-city-ball-unitynavmesh-and-cityengine/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Wed, 01 May 2013 12:54:17 +0000</pubDate>
				<category><![CDATA[3D Agents]]></category>
		<category><![CDATA[3D City]]></category>
		<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[NavMesh]]></category>
		<category><![CDATA[Games Engines]]></category>
		<category><![CDATA[Path Finding]]></category>
		<category><![CDATA[Shortest Path]]></category>
		<category><![CDATA[Unity]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3351</guid>

					<description><![CDATA[<p>The natural conclusion of our recent posts on Unity and CityEngine is of course a physics/NavMesh police chase with the agent target being a drag-and-drop ball. We will...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/05/01/police-agent-city-ball-unitynavmesh-and-cityengine/">Police Agent City Ball &#8211; Unity/NavMesh and CityEngine</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The natural conclusion of our recent posts on Unity and CityEngine is of course a physics/NavMesh police chase with the agent target being a drag-and-drop ball.<br />
<div id="attachment_3352" style="width: 588px" class="wp-caption aligncenter"><a href="https://www.digitalurban.org/wp-content/uploads/2013/05/Screen-Shot-2013-05-01-at-13.44.41-1.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3352" class=" wp-image-3352" title="Police Car Agents in the City" alt="Screen Shot 2013-05-01 at 13.44.41" src="https://www.digitalurban.org/wp-content/uploads/2013/05/Screen-Shot-2013-05-01-at-13.44.41-1.png" width="578" height="288" srcset="https://www.digitalurban.org/wp-content/uploads/2013/05/Screen-Shot-2013-05-01-at-13.44.41-1.png 826w, https://www.digitalurban.org/wp-content/uploads/2013/05/Screen-Shot-2013-05-01-at-13.44.41-1-300x150.png 300w, https://www.digitalurban.org/wp-content/uploads/2013/05/Screen-Shot-2013-05-01-at-13.44.41-1-768x383.png 768w" sizes="auto, (max-width: 578px) 100vw, 578px" /></a><p id="caption-attachment-3352" class="wp-caption-text">Police Car Agents in the City</p></div><br />
We will release a version next week for Windows and Mac for those interested &#8211; the aim was to explore shortest-path modelling of agents in Unity within an urban scene:<br />
<center><iframe loading="lazy" src="http://www.youtube.com/embed/6yFevBsW2gs" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></center>The concept was developed as part of our <a href="http://www.digitalurban.org/masters">Masters course here at CASA</a>, to explore the development of 3D interactive urban scenes&#8230;</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/05/01/police-agent-city-ball-unitynavmesh-and-cityengine/">Police Agent City Ball &#8211; Unity/NavMesh and CityEngine</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Augmented Reality City Table Top &#8211; Unity, CityEngine and Vuforia</title>
		<link>https://www.digitalurban.org/blog/2013/04/24/augmented-reality-city-table-top-unity-cityengine-and-vufoira/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Wed, 24 Apr 2013 15:00:24 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Architectural Visualisation]]></category>
		<category><![CDATA[augmented reality]]></category>
		<category><![CDATA[cityengine]]></category>
		<category><![CDATA[ESRI]]></category>
		<category><![CDATA[Unity]]></category>
		<category><![CDATA[Vuforia]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3333</guid>

					<description><![CDATA[<p>The Vuforia AR Extension for Unity allows developers to build AR apps using the cross-platform game engine &#8211; Unity. Working with both the free and pro versions of Unity, Vuforia is not...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/04/24/augmented-reality-city-table-top-unity-cityengine-and-vufoira/">Augmented Reality City Table Top &#8211; Unity, CityEngine and Vuforia</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>The <a href="https://developer.vuforia.com/">Vuforia AR Extension</a> for Unity allows developers to build AR apps using the cross-platform game engine &#8211; Unity. Working with both the free and pro versions of Unity, Vuforia is not only free, it is also one of the best tools out there for tracking and image-based tagging.<br />
<div id="attachment_3335" style="width: 687px" class="wp-caption aligncenter"><a href="https://www.digitalurban.org/wp-content/uploads/2013/04/ARCity-1.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3335" class=" wp-image-3335 " alt="Table Top AR City" src="https://www.digitalurban.org/wp-content/uploads/2013/04/ARCity-1.png" width="677" height="393" srcset="https://www.digitalurban.org/wp-content/uploads/2013/04/ARCity-1.png 967w, https://www.digitalurban.org/wp-content/uploads/2013/04/ARCity-1-300x174.png 300w, https://www.digitalurban.org/wp-content/uploads/2013/04/ARCity-1-768x446.png 768w" sizes="auto, (max-width: 677px) 100vw, 677px" /></a><p id="caption-attachment-3335" class="wp-caption-text">Table Top AR City</p></div><br />
Running on iOS, Android or directly via a webcam, it allows the integration of other Unity assets into scenes. As such, you can create a table-top digital city, complete with agents, all linked to an image-based marker and running as a native iOS app:<br />
<center><iframe loading="lazy" src="http://www.youtube.com/embed/IY07EWow1oE" height="360" width="640" allowfullscreen="" frameborder="0"></iframe></center><br />
We will be exploring this further over the coming weeks; it opens up a lot of possibilities for augmenting city models (and office desks) with urban data visualisations via custom-made iOS/Android apps&#8230;</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/04/24/augmented-reality-city-table-top-unity-cityengine-and-vufoira/">Augmented Reality City Table Top &#8211; Unity, CityEngine and Vuforia</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Shortest Path Modelling and NavMesh in Unity and CityEngine</title>
		<link>https://www.digitalurban.org/blog/2013/04/18/shortest-path-modelling-and-navmesh-in-unity-and-cityengine-2/</link>
		
		<dc:creator><![CDATA[Andy]]></dc:creator>
		<pubDate>Thu, 18 Apr 2013 12:13:49 +0000</pubDate>
				<category><![CDATA[3D Modelling]]></category>
		<category><![CDATA[Agent Based Modelling]]></category>
		<category><![CDATA[cityengine]]></category>
		<category><![CDATA[Pedestrian Modelling]]></category>
		<category><![CDATA[Shortest Path]]></category>
		<category><![CDATA[Unity]]></category>
		<guid isPermaLink="false">http://www.digitalurban.org/?p=3323</guid>

					<description><![CDATA[<p>Shortest path network analysis, pedestrian modelling and moving agents around complex city scenes have always been specialised domains. As regular readers will know we have always taken the view...</p>
<p>The post <a href="https://www.digitalurban.org/blog/2013/04/18/shortest-path-modelling-and-navmesh-in-unity-and-cityengine-2/">Shortest Path Modelling and NavMesh in Unity and CityEngine</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Shortest path network analysis, pedestrian modelling and moving agents around complex city scenes have always been specialised domains. As regular readers will know, we have always taken the view that game engines are arguably better suited to agent-based modelling &#8211; especially pedestrian and transport modelling &#8211; than more traditional packages.<br />
<div id="attachment_3325" style="width: 689px" class="wp-caption aligncenter"><a href="https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3325" class=" wp-image-3325 " alt="Agents in Unity" src="https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1-1024x476.jpg" width="679" height="315" srcset="https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1-1024x476.jpg 1024w, https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1-300x140.jpg 300w, https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1-768x357.jpg 768w, https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1-1536x714.jpg 1536w, https://www.digitalurban.org/wp-content/uploads/2013/04/AgentsCity1-1.jpg 1894w" sizes="auto, (max-width: 679px) 100vw, 679px" /></a><p id="caption-attachment-3325" class="wp-caption-text">Agents in Unity</p></div><br />
With a mix of Unity NavMesh and some simple JavaScript it is possible to create a city-wide navigation mesh. Below is our first example, using agents to follow a target (viewable in 1080p):<br />
<center><iframe loading="lazy" width="640" height="360" src="http://www.youtube.com/embed/0icL51sf5q0" frameborder="0" allowfullscreen></iframe></center><br />
We will be exploring this further over the coming weeks&#8230;</p>
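<p>For those curious, the &#8220;simple Javascript&#8221; amounts to little more than an agent re-planning its route every frame. A minimal sketch in Unity-style UnityScript &#8211; the variable names are illustrative, and it assumes a scene with a baked NavMesh and a NavMeshAgent component attached to each agent:</p>
<pre><code>// Follow a moving target across the baked navigation mesh.
var target : Transform;            // assign the target in the Inspector
private var agent : NavMeshAgent;

function Start () {
    agent = GetComponent(NavMeshAgent);
}

function Update () {
    // Re-plan the shortest path to the target's current position
    agent.SetDestination(target.position);
}</code></pre>
<p>Attach the script to each agent and Unity&#8217;s NavMesh takes care of the path finding and obstacle avoidance.</p>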
<p>The post <a href="https://www.digitalurban.org/blog/2013/04/18/shortest-path-modelling-and-navmesh-in-unity-and-cityengine-2/">Shortest Path Modelling and NavMesh in Unity and CityEngine</a> appeared first on <a href="https://www.digitalurban.org">Digital Urban</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
