Verge3D Puzzles Visual Logic

In this overview of the visual logic system used by Verge3D, I show you how its Blockly-based visual programming can build interactivity into your 3D web applications.  In the Verge3D SDK, this system is called “Puzzles”.

With this system, you can manipulate objects, materials and animations from your Blender scene.  Since Verge3D uses the three.js engine, you can go from Blender to three.js without any coding.  Three.js has been a popular and powerful 3D platform for a long time.  But unless you did some heavy JavaScript coding, it was not very accessible.   Now, Verge3D has bridged that gap.  You can build a scene in Blender (more 3D CAD platforms are planned), handle the logic in the Puzzles editor and publish directly to a web-ready WebGL format.
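
For a sense of what Verge3D saves you from writing, here is a rough sketch of the hand-coded three.js boilerplate you would otherwise need just to display an exported scene in a browser.  The file name scene.gltf is a placeholder, and the exact loader import path can vary between three.js versions; with Verge3D, the exporter and Puzzles handle this setup for you.

```ts
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Basic scene, camera and renderer setup.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 2, 5);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

scene.add(new THREE.AmbientLight(0xffffff, 0.8));

// Load the glTF scene exported from Blender ('scene.gltf' is a placeholder name).
new GLTFLoader().load('scene.gltf', (gltf) => scene.add(gltf.scene));

// Render loop.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```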

Download the .blend file for this project here.

Verge3D Launches

Verge3D has just had its first release.  Version 1.0 starts out with a suite of powerful features.

Key Features…

  • One-click HTML preview render
  • Visual logic programming using Blockly
  • Utilizes the three.js engine with no coding required
  • PBR Materials
  • Local development server
  • Browser-based Application Manager
  • Utilizes glTF (GL Transmission Format)

Verge3D is a 3D web development platform that uses Blender (other CAD platforms are planned) to create 3D web content. Using the already-powerful three.js engine and the glTF transmission format, Verge3D enables you to create objects, scenes and entire 3D web applications for your website. The platform includes a local development server and a browser-based application manager. A visual logic editor allows artists to add logic to their applications with no coding required. A one-click HTML preview lets you export your scene instantly to see how it will handle in a browser.
You can download Verge3D at soft8soft.com


Antenna Line-Of-Sight Evaluation

3D line-of-sight evaluation for long-range Wi-Fi antenna alignment

Calculating a line-of-sight signal path can be a challenge if there are trees obstructing your view of the topography.  A hillside or too many trees can kill your signal.  So how can we draw a perfectly straight line over one kilometer long to find out what our signal will need to penetrate?  Using a digital 3D model made from imagery acquired with a drone, we can draw a line from point to point and explore the obstacles along the way.  We can also use this model to help us point our directional antenna.  Since we wanted to keep the maximum detail on this model, the file size is quite large (85 MB).  You can view it live (expect some loading time) or download the HTML file for offline viewing.
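
If you wanted to script the same check inside a three.js scene, the idea is a single ray cast from one antenna position toward the other against the terrain mesh; any hit closer than the far antenna is an obstruction.  Here is a minimal sketch, with made-up antenna coordinates and assuming the drone-derived terrain is available as a mesh:

```ts
import * as THREE from 'three';

// Hypothetical antenna positions in the model's local coordinates (meters).
const antennaA = new THREE.Vector3(0, 12, 0);
const antennaB = new THREE.Vector3(850, 35, 410);

// Returns every place the terrain mesh blocks the straight line between the antennas.
function lineOfSightObstructions(terrain: THREE.Object3D): THREE.Intersection[] {
  const direction = antennaB.clone().sub(antennaA);
  const distance = direction.length();
  const raycaster = new THREE.Raycaster(antennaA, direction.normalize(), 0, distance);
  return raycaster.intersectObject(terrain, true);
}

// Usage: an empty array means the path clears the terrain (trees included, since
// they are part of the photogrammetry mesh).
// const hits = lineOfSightObstructions(terrainMesh);
// hits.forEach((hit) => console.log(`obstruction ${hit.distance.toFixed(1)} m from antenna A`));
```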

Live Demo        Download

Here are the steps to making a model like this:

  • Mission flown with a DJI Phantom 4
  • DroneDeploy mobile app was used to pilot the drone
  • Pix4D was used to process the imagery
  • Blender was used for 3D editing and export to HTML

In order to get good tree detail, we flew this mission at 400 feet with 90% front lap and 80% side lap.  Flying high and using heavy image overlap is key to capturing treed areas well.  If you want to work with imagery like this, feel free to contact us.  If you are outside the North Idaho area, we can work with you or a drone pilot near you to acquire the imagery.  Flying the drone is pretty easy.  You basically draw out a shape on a map, then add in altitude and image overlap.  The drone then flies the route and takes a batch of images.  If you are a drone pilot, you can add this kind of service without any additional hardware.  As time allows, I can process a trial batch of images for free so you can see how it works.
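
As a rough guide to how altitude and overlap translate into photo spacing, here is a back-of-the-envelope calculation.  The field-of-view numbers are only approximations for a Phantom 4-class camera, so treat the output as a ballpark figure, not a flight plan.

```ts
// Rough ground footprint and photo spacing for a nadir mapping flight.
// The field-of-view values are approximations for a Phantom 4-class camera (assumption).
const altitudeM = 400 * 0.3048;                   // 400 ft, about 122 m
const horizontalFovDeg = 80;                      // approximate, across-track
const verticalFovDeg = 65;                        // approximate, along-track

const footprintM = (fovDeg: number) =>
  2 * altitudeM * Math.tan(((fovDeg / 2) * Math.PI) / 180);

const frontLap = 0.9;                             // 90% front overlap
const sideLap = 0.8;                              // 80% side overlap

const alongTrack = footprintM(verticalFovDeg);    // ground length covered per photo
const acrossTrack = footprintM(horizontalFovDeg); // ground width covered per photo

console.log(`Trigger a photo roughly every ${(alongTrack * (1 - frontLap)).toFixed(0)} m`);
console.log(`Space flight lines roughly ${(acrossTrack * (1 - sideLap)).toFixed(0)} m apart`);
```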

Contact

Drone Mission For 3D Imaging

Screen capture from DroneDeploy app

A while back, I did a video series on the tool-chain required to get drone-acquired 3D imagery processed into HTML format so anyone can view it.  I have had some requests to give more details on the drone mapping mission phase.  Of all the steps in the process, flying the drone mission is probably the easiest.  And if you do not want to specialize in all the other technology, flying your own mission will give you the image sets to send out for processing.

When you fly an automated mapping mission, you will be using a different app to pilot your drone.  The DJI Go app is probably what you are used to, but a mapping app can fly your drone as well.  There are a number of free apps that will work, but I use one called DroneDeploy.  They have a paid processing service, but the app is free.  You can create a free account with DroneDeploy and plan your flight on a PC or right in the app.

In the planning phase, you will draw out the area you wish to map and choose an altitude and overlap.  The images need to overlap by 60-90% (front lap and side lap) depending on what you are imaging.  Detailed features like trees are more challenging and need more overlap from a higher altitude.  A flat field could be mapped at 200 feet with 60% overlap with great results.  Once you are ready, you hit the ‘Fly’ button and your drone will take off, fly the grid and land near the take-off point.  You will then have a batch of images stored on the drone’s SD card.

At this point I look at the images and remove any that may not be needed.  It will speed up processing if duplicate or oblique images are removed.  When your drone makes its turns, it sometimes takes photos that are rotated but redundant.  So I usually remove two or three images from each turn, along with any images at the beginning or end that do not fit the grid.
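
If you are culling a big image set, the turn photos are easy to spot programmatically: they are the ones where the heading swings sharply between consecutive shots.  Here is a small sketch of that idea; in practice the headings would come from each image’s EXIF data or the flight log, and the sample values below are made up.

```ts
// Flag photos taken during turns, where the heading swings sharply between shots.
// Headings would come from each image's EXIF or the flight log (assumption);
// the sample data below is made up for illustration.
interface Photo {
  name: string;
  headingDeg: number; // aircraft/gimbal yaw at capture time
}

const photos: Photo[] = [
  { name: 'DJI_0101.JPG', headingDeg: 90 },
  { name: 'DJI_0102.JPG', headingDeg: 91 },
  { name: 'DJI_0103.JPG', headingDeg: 152 }, // turning
  { name: 'DJI_0104.JPG', headingDeg: 268 }, // turning
  { name: 'DJI_0105.JPG', headingDeg: 270 },
];

// Smallest angular difference between two headings, in degrees.
const headingDelta = (a: number, b: number) => {
  const d = Math.abs(a - b) % 360;
  return d > 180 ? 360 - d : d;
};

const TURN_THRESHOLD_DEG = 30;

const turnCandidates = photos.filter((photo, i) =>
  i > 0 && headingDelta(photo.headingDeg, photos[i - 1].headingDeg) > TURN_THRESHOLD_DEG
);

console.log('Review before removing:', turnCandidates.map((p) => p.name));
```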

Scanned with a Phantom 4; 3D model exported with Blend4Web

If you want to do more specialized imaging, like 3D scanning a structure, you will need to add in some oblique photo angles.  Basically, fly your drone around the object in ascending circles to capture side views.
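
If your mapping app does not plan orbits for you, the waypoints are just points on ascending circles around the object.  Here is a quick sketch of how you might generate them; the radius, ring heights and photo counts are arbitrary example values.

```ts
// Generate camera waypoints in ascending circles around an object centered at the origin.
// Radius, ring heights, and photo counts are arbitrary example values.
interface Waypoint { x: number; y: number; z: number; headingDeg: number }

function orbitWaypoints(radiusM: number, ringHeightsM: number[], photosPerRing: number): Waypoint[] {
  const waypoints: Waypoint[] = [];
  for (const height of ringHeightsM) {
    for (let i = 0; i < photosPerRing; i++) {
      const angle = (2 * Math.PI * i) / photosPerRing;
      waypoints.push({
        x: radiusM * Math.cos(angle),
        y: height,
        z: radiusM * Math.sin(angle),
        // Point the camera back toward the object at the circle's center.
        headingDeg: ((angle * 180) / Math.PI + 180) % 360,
      });
    }
  }
  return waypoints;
}

// Example: three rings at 10 m, 20 m and 30 m, 24 photos per ring.
const plan = orbitWaypoints(25, [10, 20, 30], 24);
console.log(`${plan.length} waypoints planned`);
```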

Now that you have your batch of images, they are ready to be processed into a 3D model (or a georeferenced image).  There are some free trial services you can try.  Pix4D and DroneDeploy both have free trial options.  OpenDroneMap is open source and free, though a bit more technical to get up and running.  3DXMedia can go a step further and process your imagery into a web-ready HTML format.  If you are interested in having this done, or in learning how to do it yourself, feel free to contact us.

Contact

Blend4Web Now Supports Leap Motion

The Leap Motion being used with a normal PC

With release 17.08, Blend4Web now supports the Leap Motion hand-tracking technology.  Never before have we been able to reach into a website and pick something up.  Here is a little demo video produced by the Blend4Web team:

This video shows the technology being used with a standard PC, but since Blend4Web already supports WebVR, it is only a matter of some configuring to get your hands into VR as well.  We will be attempting that in the coming days.  Most of the hype around the Leap Motion hand-sensing technology is centered on VR.  After seeing it used with a normal PC on a page loaded over the internet, it occurs to me that there could be much larger implications.  It is far cheaper and easier to set up than VR, and the number of website viewers vastly exceeds the number of VR users.
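
On the web side, reading hand data is not much code with the LeapJS library.  The sketch below is only an assumption about how you might wire a palm position to a three.js object in your own page; it is not how the Blend4Web integration works internally.

```ts
import * as THREE from 'three';
import Leap from 'leapjs';

// A stand-in object to move around; in a real page it would already be part of your scene.
const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());

// LeapJS delivers tracking frames continuously; palmPosition is reported in
// millimeters relative to the controller, so scale it down to scene units.
Leap.loop((frame: any) => {
  if (frame.hands.length > 0) {
    const [x, y, z] = frame.hands[0].palmPosition;
    cube.position.set(x / 100, (y - 200) / 100, z / 100);
  }
});
```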

Another area of use that could have big implications is mobile VR.  This is already in the works.  The reason I think this will be huge is that with a mobile device stuck to your face, you lose the ability to interact with the touch-screen.  The Leap Motion could conceivably give you more control possibilities than the touch-screen ever did.  Browsing the web on a mobile device has always been a little annoying to me, like trying to watch a movie through a hole in the wall.  With this technology, the web page could be a large screen hovering before you, with your hands free to interact with it.

Already have your Leap Motion? Try the demo:

Live Demo


Everest Panorama Project Wins Red Dot Award

A new project created with Blend4Web is buzzing around social media.  I first read about this project in this article by Andrey Prakhov.  This amazing web application includes a 3D map of the Mt. Everest region as well as panorama pictures taken at various points all the way up to the peak.

The Everest Panorama project powered by Blend4Web

Live Demo

This is a very polished project, complete with voice narration in Russian and English.  What this amounts to is a virtual tour of Mt. Everest.  You can navigate up the path to the peak in much the same way Google Street View works.  It’s as close to Everest as you’re ever going to get while sitting at your desk!  I tried this in both Chrome and Firefox, and they both ran it very well.  Hint: the F11 key puts your browser in full-screen mode, which works well with apps like this.
Oh, and did I mention?  This project won the Red Dot Award!
I found a very nice video work-up about this project on YouTube (English subtitles).


3D Scanning With A Cell Phone

Photogrammetry is done by taking a batch of overlapping images and processing them with special software to reconstruct a 3D representation of a real-world object.  Drone mapping and 3D scanning are forms of photogrammetry.  The tool-chain I used for scanning this rock starts with a normal cellphone camera.  I took an array of images in a dome pattern, keeping the rock in the center of every image and filling most of the viewport with it.  Once I had my batch of photos, I processed them with OpenDroneMap.  This is free, open-source software designed for processing drone images, but it works fine with a bunch of pictures from your cellphone as well.  Once processed, I had a textured mesh in the form of an OBJ file.  I imported this into Blender for editing, trimmed all the ragged edges and, to reduce file size, re-baked the textures.

I should have mentioned in the video that in order to get the benefits of the single material and texture, you need to delete the original materials from the object when you are done with the bake.  Otherwise, they are still attached, even though they don’t show up.  If you don’t remove them, the HTML export stays just as large.

Finally, the one-click HTML export using the Blend4Web add-on.

This rock was scanned using a normal cell phone camera then exported to HTML format with Blend4Web

This gave me a very detailed 3D object that is web-ready and can be viewed by anyone without special software.  Wanna see?  Click the link below.  These 3D scenes contain a lot more data than a typical website, so expect a longer loading time.

Live Demo