# Chapter 4: 3D Building Rendering Pipeline
Welcome back! In our journey so far, we've built a solid foundation:

- In Chapter 1: Data Types & Models, we learned about the structured blueprints for our data, including geographic shapes (GeoJSON) and details about photos.
- In Chapter 2: Building Data API Client, we saw how to use an API client to fetch real building geometry data (in GeoJSON format) and related camera information from an external service.
- In Chapter 3: Geographic Utility Functions, we discovered how to use utility functions to perform calculations on this geographic data, like figuring out the land area and potential building height/volume based on user inputs.
Now we have all the pieces of data: the shape of the land, potentially the calculated height and volume, and even information about where a camera might have been located. But this data is just numbers and text in our program. How do we make it appear as a visible, interactive 3D building on a map for the user to see?
## Bringing Data to Life: The Rendering Pipeline
Imagine you're an architect. You have blueprints (our GeoJSON data), details about the size and height (from the API or calculated in Chapter 3), and perhaps some notes about where specific photos were taken. To show your client what the final building will look like, you create a rendering: a detailed, visual representation.
The 3D Building Rendering Pipeline is our application's version of this architect's rendering process. It's the part of the code responsible for taking the raw geographic and building data and transforming it into the visual elements displayed on the map using a powerful library called Deck.gl.
Its central task is to define how the building's ground plan, the extruded 3D shape, and other visual markers (like camera locations) are represented visually on top of our base map (which uses Mapbox).
Our main goal in this chapter is to understand how we use the data we've gathered to create the visual 3D layers that Deck.gl can display.
## Creating the Visual Layers
Deck.gl works by drawing layers on top of a map. Think of layers like transparent sheets placed one on top of another. Each sheet can have different things drawn on it: one sheet might have the park outlines, another might have roads, and another could have our buildings.
For our 3D building visualization, we need a few different layers:
- Ground Plan Layer: Shows the outline of the building's footprint or the land plot on the ground.
- Building Layer: Extrudes the ground shape upwards to create the 3D block of the building.
- Camera Marker/Model Layer(s): Displays visual markers or simple 3D models representing where photos were taken or where the camera was positioned (based on the `cameraGPSData` from the API).
The `building-height` project has a dedicated function, `createBuilding`, that acts as our rendering pipeline's main entry point for a single building visualization. It takes the necessary data inputs and produces the list of Deck.gl layers needed to draw the building and related elements.
## How to Use the Rendering Pipeline (`createBuilding`)
The primary function in our rendering pipeline is `createBuilding`, located in `src/utils/deckgl-utils.ts`. This function takes the fetched building geometry (GeoJSON) and camera data and generates the Deck.gl layers.
Here's how you would typically use it after fetching data from the API (as seen in Chapter 2):
```typescript
import { createBuilding } from "./utils/deckgl-utils";

// Assume 'fetchedBuildingData' is the object returned by fetchBuilding
// from Chapter 2, containing 'geojson' and 'cameraGPSData'.
const fetchedBuildingData = {
  geojson: {
    type: "FeatureCollection",
    features: [
      {
        type: "Feature",
        geometry: {
          type: "Polygon",
          coordinates: [
            [
              [-0.1278, 51.5074],
              [-0.1278, 51.5075],
              [-0.1277, 51.5075],
              [-0.1277, 51.5074],
              [-0.1278, 51.5074], // Close the loop
            ],
          ],
        },
        properties: {
          // The example code reads height from here
          relativeheightmaximum: 30,
          absoluteheightminimum: 10,
        },
      },
    ],
  },
  cameraGPSData: [
    { coordinates: [-0.12775, 51.50745, 50], bearing: 90, altitude: 50 }, // Example camera point
  ],
};

// Call the function to create the layers
const deckglLayers = createBuilding(
  fetchedBuildingData.geojson,
  fetchedBuildingData.cameraGPSData
);

console.log("Created Deck.gl layers:", deckglLayers);
// deckglLayers is now an array of Deck.gl Layer objects,
// ready to be added to the map visualization!
```
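One detail worth noting in the sample data above: a GeoJSON polygon ring must be closed, meaning its first and last coordinate pairs are identical (hence the "Close the loop" comment). Here is a small validator for that rule — an illustrative helper, not part of the `building-height` project:

```typescript
// Illustrative helper (not in the project): checks that a GeoJSON
// polygon ring is closed, i.e. its first and last points are identical.
type Position = number[]; // [lon, lat] or [lon, lat, alt]

function isClosedRing(ring: Position[]): boolean {
  if (ring.length < 4) return false; // a valid ring has at least 4 points
  const first = ring[0];
  const last = ring[ring.length - 1];
  return first.length === last.length && first.every((v, i) => v === last[i]);
}

// The footprint from the example above passes the check:
const ring = [
  [-0.1278, 51.5074],
  [-0.1278, 51.5075],
  [-0.1277, 51.5075],
  [-0.1277, 51.5074],
  [-0.1278, 51.5074], // closed
];
console.log(isClosedRing(ring)); // true
```

Feeding an unclosed ring to the rendering pipeline is a common source of visually broken footprints, so a check like this is a cheap safeguard before creating layers.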
When you call `createBuilding` with the GeoJSON data and camera data, it processes this information and returns an array containing several Deck.gl `Layer` objects. These layers are configured to visualize the building shape and camera positions according to the data they were given. This array of layers is exactly what Deck.gl needs to draw on the map.
## Under the Hood: Inside `createBuilding`
Let's take a look inside the `createBuilding` function (`src/utils/deckgl-utils.ts`) to see how it constructs these layers.
### The Process (Non-Code Walkthrough)
Here's the sequence of steps `createBuilding` performs:
- Receive Inputs: The function takes the `building` (a GeoJSON `FeatureCollection`) and the `cameraGPSData` array.
- Create Ground Layer: It uses a `GeoJsonLayer` from Deck.gl, passing the building GeoJSON data to it. It configures properties like fill color (light green) and line color (black) for the ground plan outline.
- Prepare 3D Data: It extracts the building's coordinates from the GeoJSON. It also gets the `relativeheightmaximum` and `absoluteheightminimum` from the GeoJSON properties to determine the base and height of the 3D shape. (Note: while Chapter 3 discussed calculating height, the rendering code shown here uses height information from the fetched GeoJSON for the extrusion. In a real application, you might use the calculated height instead.) It modifies the coordinates to include the base height as the Z coordinate.
- Create 3D Building Layer: It uses a `PolygonLayer` from Deck.gl, configured with `extruded: true`. It passes the prepared 3D coordinates and sets the `getElevation` property to the building height obtained from the GeoJSON properties. It sets colors for the extruded sides and the wireframe.
- Create Camera Model Layer: It uses a `ScenegraphLayer` from Deck.gl. It passes the `cameraGPSData` and points the layer to a 3D model file (`cam.gltf`). It uses the coordinates and bearing from `cameraGPSData` to position and orient the model.
- Create Camera Icon Layer: It uses an `IconLayer` from Deck.gl. It also uses the `cameraGPSData` and points to an icon image atlas. It uses the coordinates to position 2D markers on the map.
- Collect Layers: It puts all the created layer objects (ground, 3D building, 3D camera model, 2D camera icon) into a single array.
- Return Layers: It returns this array of layers.
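The "Prepare 3D Data" step can be sketched as a small pure function. This is a hypothetical helper for illustration; the real code does this inline (and mutates the coordinate arrays in place, whereas this version copies each point):

```typescript
// Hypothetical helper mirroring the "Prepare 3D Data" step: append a
// base height as the Z coordinate to every [lon, lat] point in a ring.
function addBaseHeight(ring: number[][], baseHeight: number): number[][] {
  return ring.map(([lon, lat]) => [lon, lat, baseHeight]);
}

const flat = [
  [-0.1278, 51.5074],
  [-0.1278, 51.5075],
];
console.log(addBaseHeight(flat, 10));
// each point becomes [lon, lat, 10]
```

Copying rather than mutating keeps the original GeoJSON intact, which matters if the same data object is reused elsewhere (for example, by the ground layer).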
Here's a simplified diagram of this process:
```mermaid
sequenceDiagram
    participant Caller as Code Calling Function
    participant CB as createBuilding Function
    participant DeckGL as Deck.gl Library
    Caller->>CB: Call createBuilding(geojson, cameraData)
    CB->>DeckGL: Create GeoJsonLayer (Ground)
    CB->>CB: Prepare 3D coordinates
    CB->>DeckGL: Create PolygonLayer (3D Building)
    CB->>DeckGL: Create ScenegraphLayer (3D Camera)
    CB->>DeckGL: Create IconLayer (2D Camera)
    DeckGL-->>CB: Return Layer Objects
    CB->>CB: Collect Layer Objects into Array
    CB-->>Caller: Return Array of Layers
```
### Code Details
Let's look at simplified snippets from `src/utils/deckgl-utils.ts`:
First, importing the necessary layer types from Deck.gl:
```typescript
// src/utils/deckgl-utils.ts
import { Layer } from "@deck.gl/core/typed";
import { GeoJsonLayer, IconLayer, PolygonLayer } from "@deck.gl/layers/typed";
import { ScenegraphLayer } from "@deck.gl/mesh-layers/typed";
// ... imports for data types like FeatureCollection (Chapter 1)

// ... createBuilding function starts here ...
```
This brings in the specific tools (Layer classes) we need from Deck.gl to draw different kinds of things.
Creating the Ground Layer:
```typescript
// src/utils/deckgl-utils.ts (inside createBuilding)
const ground = new GeoJsonLayer({
  id: "geojson-ground-layer", // A unique name for the layer
  data: building, // The GeoJSON data we passed in
  getLineColor: [0, 0, 0, 255], // Outline color (black)
  getFillColor: [183, 244, 216, 255], // Fill color (light green)
  getLineWidth: () => 0.3, // How thick the outline is
  opacity: 1, // Make it fully visible
  // ... other configuration ...
});
// ... rest of the function ...
```
This code creates an instance of `GeoJsonLayer`. We give it a unique `id`, tell it which `data` to use, and configure its appearance using `getLineColor` and `getFillColor` (arrays of RGBA values).
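Each Deck.gl color is an `[r, g, b, a]` array with every channel in the 0–255 range. If you are more used to CSS hex colors, a small converter (an illustrative helper, not part of the project) bridges the two notations:

```typescript
// Illustrative: convert a "#rrggbb" hex string into the [r, g, b, a]
// array format that Deck.gl color accessors expect.
// Alpha defaults to 255 (fully opaque).
function hexToRgba(hex: string, alpha = 255): number[] {
  const n = parseInt(hex.replace("#", ""), 16);
  return [(n >> 16) & 255, (n >> 8) & 255, n & 255, alpha];
}

console.log(hexToRgba("#b7f4d8")); // [183, 244, 216, 255] — the ground fill color
```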
Preparing Data for the 3D Building and Creating the 3D Layer:
```typescript
// src/utils/deckgl-utils.ts (inside createBuilding)

// Extract building outline coordinates
const buildingDataCopy = [
  ...(building.features[0].geometry as any).coordinates,
];

// Get height from GeoJSON properties
const buildingHeight = parseFloat(
  building.features[0].properties?.relativeheightmaximum
);
const baseHeight = parseFloat(
  building.features[0].properties?.absoluteheightminimum
);

// Add the base height (Z coordinate) to each point in the outline
let buildingCoords = buildingDataCopy[0].map((item: any) => {
  item.push(baseHeight); // Add the base height as the Z coordinate
  return item;
});

// Structure the data for PolygonLayer
const polygonData = [
  {
    contour: buildingCoords, // The list of 3D points forming the shape
  },
];

const storey = new PolygonLayer({
  id: "geojson-storey-building", // Unique name
  data: polygonData, // Our prepared data
  extruded: true, // This is the key! Tells Deck.gl to make it 3D
  wireframe: true, // Show wireframe lines on the sides
  getPolygon: (d) => d.contour, // How to get the shape points from our data
  getFillColor: [249, 180, 45, 255], // Color of the extruded sides (orange)
  getLineColor: [0, 0, 0, 255], // Color of the wireframe (black)
  getElevation: buildingHeight, // How tall to extrude it
  opacity: 1,
  // ... other configuration ...
});
// ... rest of the function ...
```
This is the core of the 3D rendering. We prepare the coordinates by adding a Z value (the `baseHeight`). Then we create a `PolygonLayer`. Setting `extruded: true` is what tells Deck.gl to turn a flat 2D shape into a 3D volume, and `getElevation` tells it how far to extrude the shape upwards from its base height. We also configure its colors and appearance.
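One pitfall to watch in the snippet above: if a GeoJSON property is missing or malformed, `parseFloat` returns `NaN`, which silently produces a broken extrusion. A safer lookup might look like this — a hypothetical helper, not code from the project:

```typescript
// Hypothetical helper: read a numeric GeoJSON property with a fallback,
// so a missing or malformed value cannot poison getElevation with NaN.
function numericProp(
  properties: Record<string, unknown> | undefined,
  key: string,
  fallback: number
): number {
  const value = parseFloat(String(properties?.[key]));
  return Number.isFinite(value) ? value : fallback;
}

const props = { relativeheightmaximum: 30, absoluteheightminimum: 10 };
console.log(numericProp(props, "relativeheightmaximum", 0)); // 30
console.log(numericProp(undefined, "relativeheightmaximum", 0)); // 0 (fallback)
```

With the sample data from earlier, this yields a base of 10 and an extrusion height of 30, matching what the `PolygonLayer` receives.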
Creating the Camera Layers:
```typescript
// src/utils/deckgl-utils.ts (inside createBuilding)
const url = "./cam.gltf"; // Path to a 3D model file

const exif3dCameraLayer = new ScenegraphLayer({
  id: "exif3d-camera-layer", // Unique name
  data: cameraGPSData, // The camera info from the API
  scenegraph: url, // The 3D model to display
  getPosition: (d) => d.coordinates, // How to get the location [lon, lat, alt]
  getColor: (d) => [203, 24, 226], // Color of the model (purple)
  getOrientation: (d) => [0, -d.bearing, 90], // How to rotate the model using bearing
  pickable: true, // Make it interactive
  opacity: 1,
  // ... other configuration ...
});

const deckglMarkerLayer = new IconLayer({
  id: "exif-icon-layer", // Unique name
  data: cameraGPSData, // The camera info from the API
  getIcon: () => "marker", // Use a predefined icon shape
  iconAtlas: "...", // URL pointing to the image containing icons
  iconMapping: { ... }, // Describes where the 'marker' icon is in the image
  getPosition: (d) => d.coordinates, // How to get the location [lon, lat] (IconLayer doesn't need altitude here)
  getColor: (d) => [Math.sqrt(d.exits), 140, 0], // Color of the icon (example uses an 'exits' data field)
  getSize: () => 5, // Base size of the icon
  sizeScale: 8, // Scaling factor for the size
  billboard: true, // Make the icon always face the camera
  pickable: true,
  // ... other configuration ...
});
// ... rest of the function ...
```
These snippets show how to add visual markers for the camera location. `ScenegraphLayer` is used to load and display a 3D model (a `.gltf` file) at the specified `getPosition` using the `cameraGPSData`. `getOrientation` points the model in the correct direction based on the camera's `bearing`. `IconLayer` is used to display a simple 2D marker image, also using the `getPosition` from the camera data.
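The `[0, -d.bearing, 90]` triple passed to `getOrientation` is Deck.gl's `[pitch, yaw, roll]` in degrees. The bearing is negated, most likely because compass bearings increase clockwise while yaw increases counter-clockwise, and the roll of 90 stands the model upright. Pulled out into a named function (illustrative; the project keeps this inline):

```typescript
// Illustrative: the orientation triple used by the ScenegraphLayer above.
// Deck.gl's getOrientation expects Euler angles [pitch, yaw, roll] in
// degrees; the compass bearing is negated to convert its clockwise
// convention to yaw's counter-clockwise one.
function cameraOrientation(bearing: number): [number, number, number] {
  return [0, -bearing, 90];
}

console.log(cameraOrientation(90)); // [0, -90, 90] — camera facing due east
```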
Returning the Layers:
```typescript
// src/utils/deckgl-utils.ts (inside createBuilding)
// ... layer creations ...

  // Return an array containing all the layers we created
  return [ground, storey, exif3dCameraLayer, deckglMarkerLayer];
};
```
Finally, the function collects all the layer objects it created and returns them as an array. This array is then used by the main map component to render everything.
## Connecting to Previous Chapters
As you can see, this rendering pipeline relies heavily on the outputs from the previous steps:
- It takes the structured geographic data (GeoJSON) and camera information, which we learned how to define using Data Types & Models (Chapter 1) and how to fetch using the Building Data API Client (Chapter 2).
- Although the current `createBuilding` snippet uses height from the fetched GeoJSON, the concept of building height being used for `getElevation` ties into the calculations we discussed in Geographic Utility Functions (Chapter 3), where we calculated potential building height based on user input and land area. In a more advanced version, you might pass the calculated height into `createBuilding` and use it instead of the height from the GeoJSON properties.
This pipeline acts as the bridge, taking the raw data and calculated results and turning them into a compelling visual output on the map.
## Conclusion
In this chapter, we explored the 3D Building Rendering Pipeline. We learned that its purpose is to translate the raw geographic and building data, often fetched from an API (Chapter 2) and structured according to our data types (Chapter 1), into visible, interactive 3D layers on a map using Deck.gl. We saw how the core `createBuilding` function takes this data and generates different types of Deck.gl layers (a ground plan, an extruded 3D building shape, and visual markers for camera positions) ready to be added to the map visualization. We looked under the hood at how it uses Deck.gl layer types and properties to achieve this transformation.
Now that we know how to get data, process it, and visualize it in 3D, the final piece is understanding how the user interacts with this visualization. How do user clicks or input changes trigger updates to the map and the calculations? That's what we'll explore in the next chapter!
Next Chapter: User Interaction Handlers
Generated by AI Codebase Knowledge Builder