It's a Laser Cutter!

It is with great joy that we welcome the newest member of the Floored family into our shop: the 24x18 Professional Series CO2 cutter from Full Spectrum Laser. Hardware enthusiasts, take note.

54 inches; 242 pounds.



Nick F. & Dustin get under the hood.
The laser itself.

We’re almost ready to take it for a spin — stay tuned for some test engravings.

Interactive 3D Visualization for Everyday Use

We had a WebGL hackathon a couple months ago; it was a lot of fun and we were super impressed by everyone who came.

As the hosts of the event, we knew we had to do something really good. Something that everyone can use. Something where 2D just won’t do it justice.

A Star is Born

Introducing the Pooperator, an interactive 3D poop:

animated pooperator preview

Use Cases

Education

3D is already being pioneered as a teaching tool for kids. A 3D version of the Bristol Stool Scale can be much more realistic and educational.

Health

Good poop == good health. Caretakers, dog-walkers, baby-sitters, etc. can track and communicate the health of their charges. Track down allergies, bad eating habits, and illnesses.

Gaming

Make logs with friends! Compete for the longest streak of firm poops. Leave poops on their Facebook wall.

Roadmap

Here are just a few of the future features:

  • Variables like viscosity, size, volume
  • Textures for different foods
  • Social: share your triumphs on Twitter and Facebook
  • Embeddable 3D viewer for putting interactive logs on any site
  • Journal of your fecal history
  • Leap Motion controls for viewing
  • Hardware addon for toilet bowls to generate 3D models

Have other ideas? Check it out on GitHub: https://github.com/floored/pooperator.

Progress Towards a Semantic Understanding of 3D Office Spaces

Automatically understanding what we’ve scanned is at the core of generating next-gen real estate visualization.


If you’re a software engineer and would like to work on these problems with us, drop us a line.

Press Roundup

The year is flying by and periodically we get behind on updating the blog, which brings me great shame. Particularly when it robs you, our loyal reader, of important news about our company and the world. 

So without further ado, here’s a quick roundup of what the world has been saying about us over the past few months:

Wall Street Journal: “3D set to transform real estate market”

This is the first press piece where the journalists actually came out with us to scan two totally different properties: an open, empty commercial space, which we scanned with the Faro Focus, and a dense, furnished residential space, which we scanned with the Matterport camera. The piece ends with us viewing our models through the Oculus Rift, a cool tour de force of some of the technologies we work with, in addition to our own.

VentureBeat via The Real Deal: “Real Estate’s Hot New Entrepreneurs”

An excellent overview article on the state of the art in the world of real estate software. There’s a great profile of Floored in there, along with features on our friends at Honest Buildings, Compstak, Fundrise & Urban Compass. Two highlights from this particular article are excellent references from our customers, Hines and Taconic Investment Partners, and being called “the sexiest thing in the industry right now.”

NY Daily News: “New York Tech Startups Awarded Opportunity to Launch in London”

We were extremely fortunate to win our category at the GREAT Tech Awards, which took me over to London last week to explore opening up an office in the UK in 2014. The trip was very successful in that we both identified a series of meaningful customer engagements overseas and also laid the groundwork for a partnership with the government’s construction strategy group.

Urban Land Institute: “CRE Tech Companies to Watch”

Quick profiles of some of the high fliers in CRE, many of the same players featured as in the Real Deal article, plus CRE stalwart View the Space. Pithy, punchy analysis about why each company “matters” in the industry as well. 

As always, we’re grateful for such flattering reviews of the company and prognostications about our future! Any questions or comments, feel free to leave a note on the blog post!

- Dave

Implementing 3 Screen Space Ambient Occlusion Methods in WebGL

How do we get movie-quality lighting in real time?

Traditional 3D pipelines take a number of shortcuts when simulating lighting in a scene in order to get realtime performance. The single largest shortcut is to disregard global illumination when calculating lighting in the scene and instead only look at local, direct lighting. This is much more efficient but also much less realistic. There has been a lot of research into ways to efficiently capture some of the feel and realism of a movie-quality global illumination simulation without the extra overhead, and one of the most popular approaches is known as ambient occlusion (AO).

What is ambient occlusion?

Ambient occlusion is a local approximation of global illumination in a scene, estimated by computing the visibility around a given point. E.g., corners don’t “see” as much of the scene as flat planes, so they would intuitively receive less light, and this is exactly what ambient occlusion is getting at. In essence, AO makes scenes look more realistic by darkening areas that wouldn’t receive as much indirect light, at a fraction of the cost of a full global illumination simulation.
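The definition above can be sketched as a tiny Monte Carlo estimator. This is an illustrative JavaScript sketch, not Luma’s code; the `isOccluded` visibility query is a hypothetical stand-in for a real scene ray test:

```javascript
// Monte Carlo estimate of ambient occlusion at a point:
// occlusion = 1 - (average visibility over the normal-oriented hemisphere).
// `isOccluded` is a hypothetical stand-in for a scene visibility/ray query.
function ambientOcclusion(point, normal, isOccluded, numSamples) {
  let occluded = 0;
  for (let i = 0; i < numSamples; i++) {
    const dir = randomHemisphereDirection(normal);
    if (isOccluded(point, dir)) occluded++;
  }
  return occluded / numSamples; // 0 = fully open, 1 = fully occluded
}

// Uniformly sample a direction in the hemisphere around `normal`
// via rejection sampling on the unit ball, flipped to the correct side.
function randomHemisphereDirection(normal) {
  let d, len;
  do {
    d = [Math.random() * 2 - 1, Math.random() * 2 - 1, Math.random() * 2 - 1];
    len = Math.hypot(d[0], d[1], d[2]);
  } while (len > 1 || len === 0);
  d = d.map(x => x / len);
  const dot = d[0] * normal[0] + d[1] * normal[1] + d[2] * normal[2];
  return dot < 0 ? d.map(x => -x) : d;
}
```

A point on an open plane reports zero occlusion; a fully enclosed point reports one, matching the intuition about corners versus flat surfaces.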

Screen-space ambient occlusion

The most common way of incorporating ambient occlusion into a modern realtime graphics pipeline is via some variant of screen-space ambient occlusion (SSAO), an estimation of AO that is calculated per-pixel each frame. SSAO fits particularly well into a modern deferred rendering pipeline such as Luma as a post-process effect operating on the g-buffer: the depth values of scene geometry are viewed as a coarse-grained heightmap, such that pixels with many nearby occluders in the heightmap approximation receive a higher amount of occlusion.
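The screen-space idea can be illustrated with a minimal sketch that treats a linear depth buffer as the heightmap described above. This is a CPU-side illustration with hypothetical names, not a shader from Luma:

```javascript
// Illustrative SSAO for one pixel, assuming a row-major linear depth buffer:
// count how many samples on a ring around (x, y) sit closer to the camera
// than the centre pixel by more than a small bias — those act as occluders.
function ssaoAtPixel(depth, width, height, x, y, radius, numSamples, bias) {
  const center = depth[y * width + x];
  let occluded = 0;
  for (let i = 0; i < numSamples; i++) {
    const angle = (2 * Math.PI * i) / numSamples;
    const sx = Math.round(x + radius * Math.cos(angle));
    const sy = Math.round(y + radius * Math.sin(angle));
    if (sx < 0 || sx >= width || sy < 0 || sy >= height) continue;
    // A sample in front of the centre pixel occludes it.
    if (depth[sy * width + sx] < center - bias) occluded++;
  }
  return occluded / numSamples;
}
```

On a flat depth buffer this reports no occlusion; a pixel at the bottom of a depression surrounded by nearer geometry reports full occlusion, which is exactly the "corners receive less light" behaviour AO is after.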

Comparing SSAO Methods in Luma

Currently, we’ve implemented three of the most popular SSAO techniques:

  1. A slightly modified version of Crytek’s original Screen-Space Ambient Occlusion (SSAO), ~2007
  2. Nvidia’s Horizon-Based Ambient Occlusion (HBAO), ~2008
  3. A modified, newer version of the Alchemy algorithm, Scalable Ambient Obscurance (SAO), ~2012

For all three algorithms, we perform an edge-aware blur pass over the AO buffers to reduce noise introduced by sampling artifacts. To improve performance, we also support generating AO at a lower resolution and bilaterally upsampling it to the render target size during the blur pass.
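The edge-aware blur is essentially a depth-weighted box filter: neighbours only contribute when their depth is close to the centre depth, so occlusion doesn’t bleed across geometric edges. A minimal one-dimensional JavaScript illustration (hypothetical names, assuming a linear depth buffer, not Luma’s shader):

```javascript
// Depth-aware (bilateral) blur along one row of the AO buffer.
// Neighbours whose depth differs from the centre by more than
// `depthThreshold` are rejected, preserving AO across depth edges.
function bilateralBlurRow(ao, depth, width, y, kernelRadius, depthThreshold) {
  const out = new Array(width);
  for (let x = 0; x < width; x++) {
    const centerDepth = depth[y * width + x];
    let sum = 0, count = 0;
    for (let k = -kernelRadius; k <= kernelRadius; k++) {
      const sx = x + k;
      if (sx < 0 || sx >= width) continue;
      if (Math.abs(depth[y * width + sx] - centerDepth) > depthThreshold) continue;
      sum += ao[y * width + sx];
      count++;
    }
    out[x] = count > 0 ? sum / count : ao[y * width + x];
  }
  return out;
}
```

With a plain box blur, an AO edge that coincides with a depth discontinuity would smear; with the depth rejection it stays sharp while noise within each surface is still averaged away.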

Here is a direct comparison of the filtered AO buffers for the three techniques, with default settings applied.

SSAO visualization
Basic SSAO
HBAO visualization
HBAO
SAO visualization
SAO

And here is a corresponding comparison of the rendered scene with AO applied. Hovering over a screenshot will display the base render without ambient occlusion. Note that the intensity of the effect has been increased for the purposes of this visualization, but in general all three SSAO techniques capture geometric creases and contact shadows between objects.

Luma render with basic SSAO Luma render with no AO
Basic SSAO
Luma render with HBAO Luma render with no AO
HBAO
Luma render with SAO Luma render with no AO
SAO

Interactive SSAO demo

You can try various parameters for each method and compare the results for yourself in the following demo:


SAO produces the best results

With respect to visual quality, SAO is capable of producing similar results to HBAO but at a fraction of the cost, and the basic SSAO version can only compete by bumping up the sampling rate to an unrealistic level for realtime performance. With carefully tuned settings to produce similar quality results, SAO requires ~9 texture fetches per pixel (SAO sample count set to 8), HBAO requires ~50 texture fetches per pixel (HBAO 7 sample directions and 7 steps along each direction), and basic SSAO requires ~17 texture fetches per pixel (SSAO sample count set to 16), though the output quality of basic SSAO is still not as good as HBAO or SAO.
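The fetch counts above follow from simple arithmetic, assuming one additional fetch per pixel for the centre lookup (an illustrative assumption about these configurations, not a measured figure):

```javascript
// Approximate per-pixel texture fetch counts for the settings quoted above,
// assuming each sample costs one fetch plus one fetch for the centre lookup.
const saoFetches  = 8 + 1;     // SAO: 8 samples + centre
const hbaoFetches = 7 * 7 + 1; // HBAO: 7 directions × 7 steps + centre
const ssaoFetches = 16 + 1;    // basic SSAO: 16 samples + centre
```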

The main difference between basic SSAO and HBAO/SAO is that basic SSAO takes occlusion samples in camera-space within a normal-oriented hemisphere around the camera-space position of a given fragment, whereas HBAO and SAO both perform the sampling in screen-space and then project those samples back to camera-space in order to determine occlusion.
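A normal-oriented hemisphere kernel of the kind basic SSAO uses can be sketched as follows. This is illustrative JavaScript rather than shader code, and the scale falloff constants are common conventions rather than Luma’s values:

```javascript
// Generate `numSamples` offsets inside the unit hemisphere (z >= 0),
// to be oriented along the surface normal per fragment at shading time.
// Sample distances are biased toward the origin so that occluders close
// to the fragment contribute the most.
function makeHemisphereKernel(numSamples) {
  const kernel = [];
  for (let i = 0; i < numSamples; i++) {
    // Random direction in the z >= 0 half-space, normalized.
    let v = [Math.random() * 2 - 1, Math.random() * 2 - 1, Math.random()];
    const len = Math.hypot(v[0], v[1], v[2]) || 1;
    v = v.map(x => x / len);
    // Quadratic falloff keeps later samples near the fragment on average.
    const scale = 0.1 + 0.9 * (i / numSamples) ** 2;
    kernel.push(v.map(x => x * scale * Math.random()));
  }
  return kernel;
}
```

Every offset lies in the upper hemisphere with length at most one, so after rotation into the fragment’s normal frame all samples stay on the visible side of the surface.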

A number of existing WebGL demos and applications implement basic SSAO and could be improved at no additional performance cost by switching to SAO.

More SSAO References

  1. A bit more deferred - CryEngine3
  2. Stable SSAO in Battlefield 3 with Selective Temporal Filtering
  3. Toy Story 3: The Video Game Rendering Techniques
  4. A Simple and Practical Approach to SSAO
  5. Volumetric Obscurance (VO)
  6. Multiresolution Ambient Occlusion (MSSAO)
  7. Separable Approximation of Ambient Occlusion (SSSAO)
  8. Approximating Dynamic Global Illumination in Image Space (SSDO)