Solar Eclipse 2012.5.20

There was an eclipse in Calgary yesterday and it was cloudy. Whatever, weather. Just. Whatever.

So, here’s a picture from Tokyo of the same event; credit goes to Kazuhiro Nogi, who has a very, very diverse portfolio and no official listing of it. Odd.

We did, instead, as we’re apt to do, go and eat ridiculous amounts of very excellent food.

via

Civilization V and Augmented Reality

I don’t want to be a negative Nancy, and don’t misread my intention: the technology is all well and good, and I daresay cool. But there’s something that’s been bothering me since the ol’ Google Glasses thing a few weeks back.

The photo above is via this post by @Sidv, who is a cool guy, and I do want to give him mad props for the post itself – it paints an idyllic future.

Buuut…

I’m not sure I want it.

Time to relate something seemingly unrelated: last weekend was Easter, and I pilgrimaged to my hometown in search of chocolate and ham with the added excitement that I’d finally get some time to play Civ 5, which I had bought in a Steam sale over a month ago and which was promptly pushed to the wayside in the shadow of actually important things. This post isn’t really a review of said game, because lots of people like it and my point doesn’t really apply to those who do / those who know what they’re doing. At my n00b level there are these advisers that inform you of the things you should be doing and hold your hand throughout your civilization’s evolution. This is nice because I’d be entirely lost without them, but I can’t help but feel like the game isn’t really mine anymore. Like I’m just a lackey who pushes the buttons they tell me to push. It’s not really a game anymore as much as it’s a list of instructions. At that point they should just press the buttons for me, play out the whole simulation, and tell me at the end if I won or lost. It’d be like flipping a coin and getting excited for the outcome.

Sure, it’s my fault I don’t know enough to fly solo, but my metaphor stands: augmented reality is a double-edged sword that could downplay the more intuitive parts of life. The parts, I’d argue, that are the most rewarding and enjoyable. Take the marathon runners in the photo above: is it really a race if you know exactly how much energy everyone has? You’re just reading bars at that point. The point of the race is to try to outplay your opponents. Are they saving their energy for a sprint near the end? Are they actually in a comfortable lead, or did they waste it all running off the line? It’d be like playing poker while seeing everyone’s cards or… flipping a coin and getting excited for the outcome.

Translated signage in other countries? Awesome. I love that. But I’d still play hard mode and do my best to learn the language, because that’s part of going to another country: learning a culture that isn’t yours. I can stay home and read English signage anytime.

Maps? Cool. There have definitely been times when I’d have liked a map on my journeys, but I look back on some of my best memories and a surprising number of them occurred when I was lost. It’s a triumph to find a landmark or something and figure it out. It’s a spatial puzzle. A big maze that you get to walk around in. Explore. I love exploring.

So I don’t want to be down on the technology; I really am excited for it. I’m just not sure I want the things in my life doing everything for me. It’s fun living on the edge, and it’s sad that walking through the streets without a map is considered “living on the edge”.

Camera as Joystick

I like how a simple idea and a subtle change can open up a whole new branch of solutions.

Definitely not wanting to steal thunder or claim anything, but I did briefly wonder about that: just sliding your finger around over the camera. The problem is that a bare finger is too dark and hard to track, whereas the joystick’s dots provide that grounding. Also, if you’ve ever slid your finger over an optical mouse’s sensor, you know it’s an awkward ergonomic motion for anything requiring precision, so the foam mechanism is advantageous there as well.

Apologies are in order for these sorts of half-posts; I’ve been super busy lately with the final weeks before graduation coming up.

via

Technological Abstractions

This calculator app from Berger & Föhr has been making waves in the blogosphere, and I wanted to mention it not because it’s pretty and novel – which it is, and that’s all well and good – but because of what it represents.

When computers started, they were a complete abstraction: lines of text that did things inside a box. Later, the then-fledgling Apple popularized the GUI to give a better interaction between human and that mystery backend. They used metaphors in both interactions and terminology. The “desktop” held “folders” with “files” in them – none of these things exists physically, but it’s a good way to communicate with users, especially when all of this was starting and people were initially confused.

Since then, we’ve come a long way. That was 1983, just a hair short of thirty years ago. Ten years before I was born. We’ve brought up an entire new generation of people who have grown up and just accept these things; it’s not really that hard an abstraction anymore. So it’s cool (for me, as part of that new generation) to see these things being streamlined and refined past the typical – and by typical I mean clunky and old school.

You pick up a physical calculator and you have buttons for operations because a) that’s how it’s always been and b) that’s really all you can do. You can rearrange them, sure, or maybe change how they work, but ultimately they have to be there in some capacity. Enter touchscreens. Not really new either; we’ve had iPhones for five years now, yet calculator apps have always included the operator buttons as a direct analogue of the physical kind. They just remade it directly. Easy to understand? Sure. Familiar? Yeah. Efficient? Not really, no.

Again, and I said it a mere paragraph up, that’s so cool. We can make things better.

We’re at the point where we’re comfortable enough with the old abstractions to go past them and make new ones – more efficient ones. I think I’ve written about it before, but the ultimate UI is blank. In its ultimate, perfect state the program (whatever it is) should work in such a way that it always knows what you want to do. Since that’s an extraordinarily tall order, we have to settle for buttons and elements as we do now. Gestures are good, but they don’t always work and they don’t always do what you want them to do, which goes against the above ideal. In this case the compromise is struck because the set is kept (in theory) simple and done in such a way that it’s easy to remember and use. Having never used the app, I can’t truly comment on it specifically. With that said, if every app had its own entire set of gestures (which is something we’re running into recently), it all becomes even more convoluted and, in the end, less useful. It’s inefficient to always have to look up what the gesture for an action is; this too goes against the ideal.

TL;DR We can make new abstractions because the UI is evolving and the new generation is used to it, which is both a power and a responsibility. And wherein I reveal my age.

Via

Intersections in an Age of Driverless Cars

Mesmerizing, isn’t it?

I’m very pro-driverless cars. Seriously. I love driving, don’t get me wrong, and there will always be racetracks because of exactly that. But for the everyday? Bring them on.

“But Brennan, doesn’t that seem dangerous? Trusting computers?” and to that I reply with a simple image:

I see this at least once – often more – every time I’m out driving. Really? And we’re supposed to be worried about computers making mistakes? Ha.

So.

You sit back in your car; the seats face each other like a restaurant booth so you can easily and comfortably converse. The world outside the windows whizzes by at a speed you were very uncomfortable with at first but grew to love as it delivered you to your destination in a third of the time. You arrive. You open the doors and disembark, closing them behind you. Since this is the future, they’d probably close with a subtle, cool hydraulic hiss of pistons. You step away toward wherever you were going, and the car pulls away silently from the curb, melting seamlessly and perfectly into the stream of traffic to pick up your spouse from work.

The intersections don’t have red lights, and traffic signs in general are taken down – everything flows at once and is centrally controlled. “But Brennan!” you interrupt again, “isn’t that one step closer to a totalitarian government controlling your location?” And that, I admit, is a very good point. On the other hand, have you ever tried to go through a red light on an old country road when it’s obviously empty and the light hasn’t changed in ten minutes? It’s terrifying. Psychologically, the red light is an overwhelming power. Would that be true for a resistance fighter’s car chase through the dystopian city? Probably not. Still, what’s the likelihood of that, anyway?

The benefits are immense and I for one welcome our driverless overlords.

Intersection video via

1963 Chrysler Turbine

It’s a shame the program ended the way it did – a few cars in museums and the rest destroyed. There was actually a turbine tank in the works from Chrysler as well, and like its car counterpart it initially wasn’t accepted, mostly because it was new and different. To be fair, though, while they never sold any turbine cars, they did go on to sell turbine-powered M1 tanks.

I don’t think I’ll get into the industrial design of the car itself: it’s pretty era-typical in its decadence. It features a circular turbine heat-sink motif, but really wasn’t that different from what was on the road at the time.

So. Why a turbine, anyway? Well, Jay Leno and a ’60s educational film have you covered:

(I can’t embed with time markers, so you’ll have to manually skip to 10:14)

It should be noted at this point that Americans seem to say ‘turbine’ like ‘turban’, which I hadn’t heard before. We say it acknowledging the ‘e’ at the end: tur-bine.

Basically, there are far fewer moving parts, which means it runs longer without maintenance and has a much longer engine life. It doesn’t require antifreeze and can start without problems in the cold, providing instant heat for passengers. It gets much better fuel economy (the mechanism itself is much more efficient) and you can run it on basically any fuel. The exhaust gases burn much cleaner. No vibrations and negligible oil consumption; it won’t stall under load. The engine itself is smaller and lighter than comparable internal combustion varieties.

Pretty great, from the sounds of it.

So what went wrong?

It was a combination of things. According to Bob Sheaves, Chrysler Corp. wasn’t doing so well in 1979, and in the bailout had to shut down its defense contracts, which included the turbine M1 project. It was public knowledge that the technology was there, but it has since faded into obscurity. The company had the tooling and dies for Turbine car production, but given the shaky economic times it was deemed too risky.

I always thought it had to do with the lag – turbines work really well at a set RPM and don’t like to vary all that much, making for a delay between when you put your foot down and when you get a response from the engine. This, as I’ve learned, wasn’t really an issue by the end of the program. In 1980 they had a seventh-generation engine with a lag of under one second (only a bit longer than piston versions), down from the notorious seven seconds seen in the first generations. If you’ve driven anything turbocharged you’ll know that spooling feeling, that delay. This was a stigma that stuck with the cars in the public’s mind.

Since it requires a higher RPM to stay efficient, there were concerns about excessive fuel consumption while idling, even with its better efficiency. This is interesting, considering it delivers torque at zero output RPM – just starting the engine easily provides enough power to push the car. Couldn’t they have just turned the engine off at every available chance?

Exhaust heat was exactly that: hot. Not only was it dangerous as a public concern, but containing it meant the engineering incorporated expensive alloys. I am curious, though, given how much materials technology has improved in the past fifty years, whether that would still be as much of a problem.

The first generations were loud, but if you keep watching the Jay Leno video above, the noise (albeit more vacuum-cleaner-sounding than normal) seems reasonable inside the cabin. Again, I wonder how much that could be improved just by using modern materials and technologies.

Cost. Yup. The all-defining factor. There might be fewer parts, but each part has to be that much more exotic. Despite the awesome low maintenance of the engines, if something goes it really goes – and that’s costly. Back once more: modern materials? I’m curious, and the internet doesn’t seem to have much on the subject.

You can read more here, which is where a lot of the above information comes from (cross referenced, of course).

I suspect there are a lot of really cool technologies that were invented long ago and, because of the materials and methodology of the time, were impossible or impractical to make, and so were forgotten. If I had the money, believe me, I’d fund an entire research department that just revisits old tech like this. Alas.

It’s crazy to read about the first Apollo missions. How they managed to do anything with such limited computing power is astounding to me. I’m a new generation, I guess. I can’t do long division on paper because I haven’t needed to since grade five, or whenever it was that we learned it.

Makibox $300 3D Printer

Wow.

People keep envisioning the one-3D-printer-per-home thing, and although I think it’ll eventually happen, I do believe we need to clear the first and biggest hurdle: the cost-to-output ratio.

Right now you can only print in ABS plastic which, while very good, doesn’t really allow for much in the way of truly useful things. Sure, figurines and little doodads are cool, but for this to effectively overrun the centralized manufacturing markets the technology is going to have to add integrated electronics printing and a multitude of materials. Look around your room: how many things are made of just one, pure material? Not many…

So because the output is limited, the cost seems relatively astronomical, even for a $1000 unit. Yes, this is much smaller / faster / cheaper than years ago, but for the everyman it’s still too expensive for what it can do.

Thus, you either have to increase output ability or decrease cost to match current output ability. Ideally, the two lines will intersect at some point and that will begin the mass market for these things.

The Makibox does the latter: for $300 it becomes so affordable that the material limits begin to seem alright and the entire venture looks more promising. And that’s pretty awesome.

You can read further here.

Circular CNC Mill

I had an idea the other day. There are a couple of primitive versions of this, but there isn’t (to my knowledge, which is the main point) anything CNC-controlled in this configuration.

And, as opposed to the table spinning past a static cutting bit, if the bits were articulated on their own you could feasibly have several working on the same object at once without getting in each other’s way.

CNC machines can be made flexible by adding more axes in the form of an arm, but I feel like a redesign of how it’s done can yield better results while keeping the cost and complication down. The main hurdle, probably, is that it becomes a lot more complicated to translate 3D data into cutting data when you’re using non-standard XYZ axes. The original machine code (before the rise of practical 3D modelling) was entered as coordinates by hand. It was computer controlled, but not nearly to the same level of complexity we have today. The processing power we already have could be used to better leverage machines configured differently. To make that concrete, see the toy sketch below.
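Here’s a minimal sketch (entirely my own illustration, not real controller code) of what remapping a toolpath looks like for a configuration where the table spins under a bit that only moves radially and vertically:

```python
import math

def to_rotary(x, y, z):
    """Map a Cartesian toolpath point onto a hypothetical rotary-table
    machine: table angle, radial carriage position, spindle height."""
    r = math.hypot(x, y)                     # distance of the bit from the table's center
    theta = math.degrees(math.atan2(y, x))   # angle to spin the table to
    return theta, r, z

# Even a plain straight-line cut in XY becomes a coordinated
# spin-plus-slide move, which is where the planning complexity comes from.
for step in range(5):
    x = 10.0 + 2.0 * step
    print(to_rotary(x, 15.0, -1.0))
```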

There are disadvantages to my design as well, but fewer than with the straight top-down approach, since I would venture to say most geometry has undercuts somewhere or other. Small improvements. You could add axes to this as well to better help things.

If you’d like to make this a reality, I’d love to develop it with you. If you intend to thieve it, at least do it right. I think I’d be far more upset if someone stole my idea and then did something sub-par with it. Ideas are cheap. Execution is everything.

Onward. There’s more of the world to improve!

Blender Bokeh Overview / Tutorial

Covering both internal (node based) and the new Cycles version built into the camera settings.

On the left we have the node and on the right the Cycles camera panel. I’ll come back to that after some photographer’s physics:

Inside a camera lens is a diaphragm made up of an iris: a number of blades that can open and close to varying degrees, allowing varying amounts of light onto the film / sensor. Fortunately, in the digital world we don’t have to deal with things like exposure and shutter speed, so we can make the aperture hole whatever we want purely with depth of field (DoF) in mind.

As the aperture hole gets bigger, the F-stop number gets smaller, the depth of field gets shallower, and the blur becomes more pronounced (bokeh gets bigger):

Confusingly, there are two numbers that photographers and Blender programmers both use. One refers to the F-stop of the aperture and one to the size of the hole it makes. They’re referring to the same measurement in reality, but for some reason no standard is upheld to use one or the other. F-stop (denoted f2, f4, etc.) is for all practical purposes arbitrary – they were set hole sizes for the old, old film cameras, with each stop letting in exactly twice the light of the last. All you really need to know is that typical camera lenses run between f1.4 and f22, but most commonly sit around f5-f10 once the film’s exposure and whatnot are taken into account. Conveniently, the Blender node simply uses this number directly. Plug in some value around 5 and you’ll get a decent result.

But you’ll notice this F-value gets smaller as the hole gets bigger. A pain. The Cycles interface instead uses the hole size directly: the bigger the number, the bigger the hole, the bigger the bokeh. It makes sense, and I applaud them for taking the straightforward approach for strictly digital users, but a lot of us are photographers and more comfortable with the old notation. Personal preference, I guess. Anyway, values between 0.1 and 1.0 seem to work well, depending on the focal point (more on that later).
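If you’d rather set all this from a script than hunt through the panel, here’s a minimal bpy sketch. The default camera data block name and the property paths are assumptions on my part – they’re from the 2.6x-era API and have moved around between Blender versions:

```python
import bpy, math

# Assumes the default scene camera, whose data block is named "Camera".
cam = bpy.data.cameras["Camera"]

# Where the focal plane sits, in Blender units from the camera.
cam.dof_distance = 10.0

# Cycles aperture settings, in the panel's order: size | blades | rotation.
cam.cycles.aperture_size = 0.3                   # bigger hole = more blur, bigger bokeh
cam.cycles.aperture_blades = 6                   # 0 = perfect circle, 6 = hexagon
cam.cycles.aperture_rotation = math.radians(30)  # UI shows degrees, the API wants radians
```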

There is a conversion ratio, apparently, but I tried it in a few experimental renders and it didn’t seem to be very accurate for me, so I’m not sure I’ll bother posting it. Mostly, do it by test and, eventually, feel.
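If you want a starting point anyway, the standard photographic relation is that the aperture diameter equals the focal length divided by the f-number. A quick sketch, assuming Cycles reads its size value as an aperture radius in meters (which may well be where the inaccuracy I saw crept in):

```python
def fstop_to_cycles_size(fstop, focal_length_mm=35.0):
    """Standard optics: aperture diameter = focal length / f-number.
    Halve that for a radius and convert mm to meters, assuming Cycles'
    size value is an aperture radius in meters."""
    return (focal_length_mm / 1000.0) / fstop / 2.0

print(fstop_to_cycles_size(2.0))   # 35 mm lens at f2 -> 0.00875
print(fstop_to_cycles_size(5.6))   # 35 mm lens at f5.6 -> ~0.0031
```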

I’ve been lurking various forums and subreddits, and there have been a few comments on bokeh: what it is and how to get it. The node system sort of did it, but unreliably; the Cycles system does it much better. You can download my test .blend HERE to play along. It’s just an array of cubes, plus a few other cubes that have an emission material.


150 samples :: 23.79 s :: Cycles: 0 | 0 | 0 (appearing in order of interface: size, blades, rotation)


300 samples :: 48.04 s :: Cycles: 0.1 | 0 | 0

You’ll notice that the more blur there is, the more samples you’ll need to keep it smooth. Mine aren’t quite done yet (still a bit grainy), but I’m still in the low range of samples (150-500).

Bokeh is the word used to describe the shape of the out-of-focus parts. Because of the lens focus and the physics of optics, it takes the shape of the diaphragm the light is focused through. So, if the lens has a lot of blades in the iris it’ll be more circular (common), and sometimes the aperture only has five or six blades, creating pentagon and hexagon bokeh respectively. In movies you’ll see triangle and diamond bokeh for stylistic effect, and if you put a cutout in front of the lens you can make it whatever shape you want. Typically, circles are the smoothest and hexagons get used for science fiction movies. Subtle differences, but those angles can really change the feel of a scene.


500 samples :: 1:21.23 :: Cycles: 0.3 | 0 | 0


500 samples :: 1:21.60 :: Cycles: 0.3 | 6 | 0

The difference between 0 (circle) and 6 (hexagon) bokeh. Notice the best-looking bokeh comes from the smaller cubes in the back (to the left) – the large ones just blow it all out.

Both the node and Cycles have this built in: simply select the number of sides you want, from 0 (a perfect circle) up to an octagon. Beyond that, polygons tend to look like circles anyway, so there isn’t any need to go higher. In Cycles, just type in a number for the blade count (0, or 3-9). Both have an angle / rotation as well. This is the rotation (in degrees) that the polygon gets turned by. Say you’re using a pentagon and you want the point up or down: you can angle it as you’d like. If you’re trying to simulate a specific lens, just find a source image and look at how its diaphragm is rotated, then rotate your bokeh accordingly. Note this does not affect the size at all.


500 samples :: 1:21.24 :: Cycles: 0.3 | 6 | 30

So we can rotate the hexagon bokeh 30 degrees to put a flat edge at the bottom. 360 / 6 / 2 = 30


1000 samples :: 2:51.62 :: Cycles: 0.6 | 6 | 30


1000 samples :: 2:51.29 :: Cycles: 0.6 | 9 | 0

Taking it to the extreme: 0.6 creates a lot of blur and needs a lot more samples. Almost 3 minutes vs. 23 seconds for the 0 DoF control render.

Since bokeh is the effect of light’s focus on the film / sensor, the amount of light defines a lot about the resulting effect. Light sources are often the cause of the prominent bokeh in the backgrounds of scenes – makes sense; they’re putting out light. Specular reflections on geometry can also make bokeh, but the effect will typically be diminished because the material absorbs some of the energy in the bounce. Likewise, glossy materials work better than matte materials for this; rarely will you get any real effect from a matte material, since it absorbs and scatters most of the light.

The size of the bokeh, as mentioned briefly above, is directly linked to the aperture size – the amount of DoF blur. If you aren’t getting the desired effect, it’s because either there isn’t enough light power to create it or the ‘lens’ is too in focus, meaning there isn’t enough blur to get anything good. Typically, it’ll be the former. This is where the node DoF struggles: the threshold for creating bokeh seems a little off, and it takes a really intense light value in the scene to get results. It’s a tradeoff, though, because you can easily make things too blurred and start moving into tilt-shift territory.

Now, there is another reason the camera might be too in focus and it has to do with the focus distance. The closer the focus is to the camera, the more DoF blur you’ll get, even at the same aperture size. If the focus is extremely far away, you’ll have to compensate with an irrationally large iris to maintain the same amount of blurring effect.
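The standard thin-lens math backs that intuition up: the blur circle a point makes on the sensor grows with the aperture and shrinks as the focus distance moves out. A rough sketch, nothing Blender-specific:

```python
def blur_circle(aperture_d, focal_len, focus_dist, subject_dist):
    """Thin-lens blur circle diameter on the sensor for a point at
    subject_dist with the lens focused at focus_dist. All values in
    the same unit (meters here); aperture_d is the hole's diameter.
    A textbook approximation, not Blender's internal model."""
    return (aperture_d * focal_len * abs(subject_dist - focus_dist)
            / (subject_dist * (focus_dist - focal_len)))

# Same lens, same 2 m subject offset: focusing close blurs far more.
print(blur_circle(0.02, 0.035, 2.0, 4.0))    # ~1.8e-4 (big blur)
print(blur_circle(0.02, 0.035, 20.0, 22.0))  # ~3.2e-6 (barely any)
```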

It seems, in my informal tests so far, that the node-based approach is much faster but Cycles is much more accurate, especially where reflections are concerned (the node isn’t smart enough – it will reflect perfect focus instead of realistically reflected bokeh). The more blur you have, the more samples you’ll need to make it smooth. There seems to be a curve to this, where minor increases in blur mean major extra samples. Just keep it in mind. You can use node compositing on top of a Cycles render, so it isn’t limited to the internal engine.

Let’s review:
-F/stop between ~2-10 (2 being lots, 10 being less)
-Cycles size between ~0.2-1.0 (0.2 being less, 1.0 being lots)
-Shape defines the appearance of the bokeh
-Rotation rotates that shape
-Distance to camera and lighting power do affect bokeh intensity / size
-Nodes are faster, Cycles more accurate

Credits to everyone whose image I googled; all link through.

You can download my demo scene HERE to play with.

Three Cubes Colliding – 3D Printed Kite

It almost looks unreal – like a camera-tracking test or something to superimpose 3D geometry into a real shot. I like that sort of ethereal effect in design. There is a very small list of existing things that you look at and wonder how the shapes arrived at that particular configuration. I don’t know too much about kites, so I’d be really interested in how they figured out the triangle pattern they used.

In any event, it’s a really cool triumph of 3D printing being used in a practical, real-world application instead of for flimsy models that would later be cast in something stronger. If you can eliminate that second step of the process and cut straight to strong, easily made materials, it would have a really good impact on how we make everything in society.

Awesome work, Queen and Crawford.

Via

