It is with great joy that we welcome the newest member of the Floored family into our shop: the 24x18 Professional Series CO2 cutter from Full Spectrum Laser. Hardware enthusiasts, take note.
We’re almost ready to take it for a spin — stay tuned for some test engravings.
Good poop == good health. Caretakers, dog-walkers, baby-sitters, etc. can track and communicate the health of their charges. Track down allergies, bad eating habits, and illnesses.
Gaming
Make logs with friends! Compete for longest streak of firm poops. Leave poops on their Facebook wall.
Roadmap
Here are just a few of the future features:
Variables like viscosity, size, volume
Textures for different foods
Social: share your triumphs on Twitter and Facebook
Embeddable 3D viewer for putting interactive logs on any site
The year is flying by and periodically we get behind on updating the blog, which brings me great shame. Particularly when it robs you, our loyal reader, of important news about our company and the world.
So without further ado, here’s a quick roundup of what the world has been saying about us over the past few months:
This is the first press piece for which the journalists actually came out with us to scan two totally different properties: an open, empty commercial space, which we scanned with the Faro Focus, and a dense, furnished residential space, which we scanned with the Matterport camera. The piece ends with us viewing our models through the Oculus Rift, a fitting tour de force of some of the technologies we work with, in addition to our own.
An excellent overview of the state of the art in real estate software. There’s a great profile of Floored in there, along with features on our friends at Honest Buildings, Compstak, Fundrise & Urban Compass. Two highlights of this particular article are the excellent references from our customers, Hines and Taconic Investment Partners, and being called “the sexiest thing in the industry right now.”
We were extremely fortunate to win our category at the GREAT Tech Awards, which took me over to London last week to explore opening an office in the UK in 2014. The trip was a success: we identified a series of meaningful customer engagements overseas and laid the groundwork for a partnership with the government’s construction strategy group.
Quick profiles of some of the high fliers in CRE, featuring many of the same players as the Real Deal article, plus CRE stalwart View the Space. There's pithy, punchy analysis of why each company “matters” in the industry as well.
As always, we’re grateful for such flattering reviews of the company and prognostications about our future! If you have any questions or comments, feel free to leave a note on the blog post!
How do we get movie-quality lighting in real time?
Traditional 3D pipelines take a number of shortcuts when simulating lighting in a scene in order to achieve realtime performance. The single largest shortcut is to disregard global illumination when calculating lighting and instead consider only local, direct lighting. This is much more efficient but also much less realistic. There has been a lot of research into ways to efficiently capture some of the feel and realism of a movie-quality global illumination simulation without the extra overhead, and one of the most popular approaches is known as ambient occlusion (AO).
What is ambient occlusion?
Ambient occlusion is a local approximation of global illumination, estimated by computing the visibility around a given point. For example, corners don’t “see” as much of the scene as flat planes, so intuitively they should receive less light, and this is exactly what ambient occlusion captures. In essence, AO makes scenes look more realistic by darkening areas that would receive less indirect light, at a fraction of the cost of a full global illumination simulation.
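To make the idea concrete, here is a minimal sketch of AO as hemisphere visibility sampling, in plain Python rather than shader code. This is not Luma's implementation; the scene query `is_solid`, the ray-march step sizes, and all parameters are made up for illustration.

```python
import math
import random

def ambient_occlusion(point, normal, is_solid, samples=64):
    """Estimate AO at `point` by sampling visibility in the normal-oriented
    hemisphere: the fraction of short rays that escape without hitting
    geometry. `is_solid(p)` is a hypothetical scene query returning True
    inside geometry."""
    random.seed(0)  # deterministic directions, for the sake of the sketch
    unoccluded = 0
    for _ in range(samples):
        # Random unit direction, flipped into the hemisphere around `normal`.
        d = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in d))
        d = [c / n for c in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-c for c in d]
        # March a short distance along the ray; any hit counts as occlusion.
        hit = any(is_solid([pi + t * di for pi, di in zip(point, d)])
                  for t in (0.25, 0.5, 0.75, 1.0))
        unoccluded += 0 if hit else 1
    return unoccluded / samples  # 1.0 = fully open, lower = darker

# Two walls meeting at a corner: everything with x <= 0 or y <= 0 is solid.
corner = lambda p: p[0] <= 0 or p[1] <= 0
print(ambient_occlusion([0.1, 0.1, 0.0], [0, 0, 1], corner))  # corner: darker
print(ambient_occlusion([5.0, 5.0, 0.0], [0, 0, 1], corner))  # open plane: bright
```

A point tucked into the corner sees geometry in a large fraction of its hemisphere and so comes back darker than a point out on the open plane, which is exactly the intuition above.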
Screen-space ambient occlusion
The most common way of incorporating ambient occlusion into a modern realtime graphics pipeline is via some variant of screen-space ambient occlusion (SSAO), an estimate of AO recalculated per-pixel each frame. SSAO fits particularly well into a modern deferred rendering pipeline such as Luma as a post-process effect operating on the g-buffer: the depth values of the scene geometry are treated as a coarse-grained heightmap, and pixels with many nearby occluders in that heightmap receive a higher amount of occlusion.
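As an illustration of the heightmap view, here is a toy single-pixel SSAO estimate over a plain 2D depth buffer in Python. The fixed sample kernel and the `bias` and `range_check` values are our own illustrative choices, not parameters from Luma or from any of the published techniques.

```python
def ssao(depth, x, y, bias=0.02, range_check=0.5):
    """Toy screen-space AO for a single pixel of a depth buffer (a 2D list
    of eye-space depths). Neighbours sufficiently closer to the camera than
    the centre pixel count as occluders."""
    h, w = len(depth), len(depth[0])
    d0 = depth[y][x]
    # Fixed sample kernel standing in for the usual randomized one.
    offsets = [(-3, 0), (3, 0), (0, -3), (0, 3),
               (-2, -2), (2, 2), (-2, 2), (2, -2)]
    occluded = 0
    for dx, dy in offsets:
        sx = min(max(x + dx, 0), w - 1)
        sy = min(max(y + dy, 0), h - 1)
        diff = d0 - depth[sy][sx]
        # Range check: very large depth gaps are ignored so distant
        # silhouettes do not darken the background far behind them.
        if bias < diff < range_check:
            occluded += 1
    return 1.0 - occluded / len(offsets)  # ambient visibility factor

# A depth "step": left half near the camera (1.0), right half farther (1.2).
depth = [[1.0 if col < 4 else 1.2 for col in range(8)] for _ in range(8)]
print(ssao(depth, 5, 4))  # just beside the step: partially occluded
print(ssao(depth, 7, 4))  # far from the step: fully visible
```

The pixel next to the step sees several closer neighbours in the heightmap and is darkened; the pixel far from any depth discontinuity stays fully lit.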
Comparing SSAO Methods in Luma
Currently, we’ve implemented three of the most popular SSAO techniques: basic SSAO, horizon-based ambient occlusion (HBAO), and scalable ambient obscurance (SAO).
For all three algorithms, we perform an edge-aware blur pass over the AO buffer to reduce the noise introduced by sampling artifacts. To improve performance, we also support generating the AO at a lower resolution and bilaterally upsampling it to the render target size during the blur pass.
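The edge-aware blur can be sketched as a bilateral filter whose weights fall off with depth difference, so AO noise is smoothed without bleeding across silhouettes. Again, this is a toy Python version with made-up parameters, not Luma's shader.

```python
import math

def edge_aware_blur(ao, depth, x, y, radius=2, depth_sigma=0.1):
    """One output pixel of a toy edge-aware blur: average nearby AO values,
    but down-weight neighbours whose depth differs from the centre so the
    blur does not bleed across geometric edges."""
    h, w = len(ao), len(ao[0])
    d0 = depth[y][x]
    total, weight_sum = 0.0, 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx, sy = x + dx, y + dy
            if not (0 <= sx < w and 0 <= sy < h):
                continue
            # Bilateral weight: ~1 at equal depth, ~0 across a depth edge.
            wgt = math.exp(-((depth[sy][sx] - d0) / depth_sigma) ** 2)
            total += wgt * ao[sy][sx]
            weight_sum += wgt
    return total / weight_sum

# Dark AO on the near side of a depth edge, bright AO on the far side.
depth = [[1.0 if col < 3 else 2.0 for col in range(6)] for _ in range(6)]
ao    = [[0.4 if col < 3 else 1.0 for col in range(6)] for _ in range(6)]
print(edge_aware_blur(ao, depth, 2, 3))  # stays near 0.4: no bleed across the edge
```

A plain Gaussian blur at the same pixel would mix in the bright far-side values; the depth weight is what keeps the contact shadow crisp at the silhouette.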
Here is a direct comparison of the filtered AO buffers for the three techniques, with default settings applied.
And here is a corresponding comparison of the rendered scene with AO applied. Hovering over a screenshot will display the base render without ambient occlusion. Note that the intensity of the effect has been increased for the purposes of this visualization, but in general all three SSAO techniques capture geometric creases and contact shadows between objects.
Interactive SSAO demo
You can try various parameters for each method and compare the results for yourself in the following demo:
SAO produces the best results
With respect to visual quality, SAO is capable of producing similar results to HBAO but at a fraction of the cost, and the basic SSAO version can only compete by bumping up the sampling rate to an unrealistic level for realtime performance. With carefully tuned settings to produce similar quality results, SAO requires ~9 texture fetches per pixel (SAO sample count set to 8), HBAO requires ~50 texture fetches per pixel (HBAO 7 sample directions and 7 steps along each direction), and basic SSAO requires ~17 texture fetches per pixel (SSAO sample count set to 16), though the output quality of basic SSAO is still not as good as HBAO or SAO.
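The fetch counts above are easy to sanity-check; treating the one fetch beyond the raw sample count as a centre tap is our assumption about how the approximate totals round.

```python
# Back-of-envelope per-pixel texture fetch counts during AO generation
# (before the blur pass); the "+ 1" centre tap is an assumption.
sao  = 8 + 1       # 8 samples                      -> ~9 fetches
ssao = 16 + 1      # 16 hemisphere samples          -> ~17 fetches
hbao = 7 * 7 + 1   # 7 directions x 7 steps each    -> ~50 fetches
print(sao, ssao, hbao)
```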
The main difference between basic SSAO and HBAO/SAO is that basic SSAO takes occlusion samples in camera-space, within a normal-oriented hemisphere around the camera-space position of a given fragment, whereas HBAO and SAO both perform their sampling in screen-space and then project those samples back to camera-space in order to determine occlusion.
A number of existing WebGL demos and applications implement basic SSAO and could be improved at no additional performance cost by switching to SAO.