
Chapter 8: Geographic Data Processing

Welcome back! In Chapter 7: API Data Fetching, we learned how our application gets data from external servers, like building boundaries or lists of gallery photos. We also saw how to handle data from user file uploads in Chapter 2: User Input Handling (Files & Interaction), and in Chapter 6: Data Models we defined the expected structure of this data using interfaces.

But often, the raw data we receive isn't immediately ready for display or analysis. For example, a GeoJSON file might give us the boundary points of a building, but it doesn't automatically tell us its ground area or how far to extrude it in 3D. A point cloud might be in a specific coordinate system that doesn't match our map.

This is where Geographic Data Processing comes in.

What's the Problem?

Imagine you have the geographic outline of a piece of land or a building polygon, perhaps in a GeoJSON file. You want to:

  • Calculate the area of that shape.
  • Estimate the building's floor area or even its volume based on inputs like typical floor height and how much of the land is covered by the building.
  • Find the geographic center of the building shape to place a marker or center the map view.

Or perhaps you've loaded a dense point cloud (LAZ data) that represents a 3D scan of an area. This data might use a local coordinate system or a projected coordinate system (like EPSG:3857), but your map visualization library (Chapter 3: 3D Geographic Visualization) might require coordinates in a global system like WGS84 (EPSG:4326, the system latitude/longitude uses). You need to convert these coordinates.

These tasks require performing calculations and transformations on the geographic data itself.

What is Geographic Data Processing?

Geographic Data Processing is the set of operations performed on geographic data (like points, lines, and polygons defined by coordinates) to derive new information, calculate metrics, or transform the data into a different format or coordinate system.

It takes the raw geographic information and processes it into the meaningful numbers and coordinates needed for visualization, analysis, and display in our application's UI (Chapter 4: Building Information Display).

Our project uses two main libraries specifically designed for these kinds of tasks:

  1. Turf.js: This is a powerful JavaScript library for performing spatial analysis. It provides functions for common geographic operations like calculating area, finding centers, measuring distances, and much more.
  2. Proj4js: This is a library for transforming coordinates between different geographic coordinate systems and projected coordinate systems.

Think of it like having a specialized toolkit for maps. If you have a boundary line on a map and need its length, you use a specific tool from the kit. If you need to know the center point of an area, there's a tool for that too. Turf.js and Proj4js are those specialized tools.

Calculating Building Metrics with Turf.js

A core use case in our application is calculating metrics for a building polygon, like its area, volume, and center point. This is primarily handled by the computeGeoMatrics function in src/utils/geo-operations.ts.

This function takes the raw polygon coordinates from a GeoJSON feature and some user inputs (like the desired number of floors and floor height) to estimate building statistics.

Let's look at a simplified version:

// Inside src/utils/geo-operations.ts (Simplified)
import { area, centerOfMass, polygon } from "@turf/turf";

// Helper to round numbers
const round = (number: number): number => {
  return parseFloat(number.toFixed(2)); // Keep only 2 decimal places
};

// Function to compute metrics
export const computeGeoMatrics = (
  coordinates: any, // This would be the nested array of coordinates from GeoJSON
  floorHeight: number,
  floorNumber: number,
  lotCoverage: number // Percentage, e.g., 80 for 80%
) => {
  // Use Turf.js to create a polygon object from the coordinates
  const areaPolygon = polygon(coordinates);

  // Calculate the area of the polygon using Turf.js
  const landArea = round(area(areaPolygon)); // area() returns square meters

  // Calculate estimated building metrics based on land area and user inputs
  const buildingHeight = floorHeight * floorNumber;
  const buildingArea = round(landArea * (lotCoverage / 100)); // Assuming building area is landArea * lot coverage %
  const volume = round(
    landArea * (lotCoverage / 100) * floorHeight * floorNumber // Simple volume = area * height
  );

  // Find the geographic center of the polygon using Turf.js
  const center = centerOfMass(areaPolygon); // centerOfMass() returns a Point feature

  // Return an object containing all the calculated metrics and the center
  return { center, landArea, buildingArea, volume, buildingHeight };
};

Here's what this function does:

  1. Input: It receives the coordinates array (typically a deeply nested array for a polygon from a GeoJSON feature) and three numbers representing user inputs (floorHeight, floorNumber, lotCoverage).
  2. Create Turf Polygon: polygon(coordinates) is a function from Turf.js that takes a standard GeoJSON polygon coordinate array and turns it into a Turf.js Feature object of type Polygon. This object is now ready for Turf.js operations.
  3. Calculate Area: area(areaPolygon) is another Turf.js function that calculates the area of the polygon Feature, returning the result in square meters. We then use our round helper to format it nicely.
  4. Estimate Building Metrics: Using simple arithmetic, it estimates the buildingHeight, buildingArea, and volume based on the calculated landArea and the provided user inputs for floorHeight, floorNumber, and lotCoverage.
  5. Find Center: centerOfMass(areaPolygon) is a Turf.js function that finds the center point of the polygon's mass. This is useful for placing markers or centering the map. It returns a Turf.js Feature of type Point.
  6. Output: The function returns a single object containing all the calculated metrics (landArea, buildingArea, volume, buildingHeight) and the center point. The returned metrics object matches the Metrics interface we saw in Chapter 6: Data Models, and the center is a GeoJSON-like Point structure.

This shows how Turf.js helps perform common geometric calculations (area, center) directly on geographic coordinates, and then we use those results along with other inputs to derive further metrics.
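To make the arithmetic concrete, here is a small hypothetical sketch that mirrors the metric calculations above, with landArea hard-coded in place of the Turf.js area() call (the numbers are invented for illustration):

```typescript
// Hypothetical sketch: the metric arithmetic from computeGeoMatrics,
// with landArea hard-coded in place of Turf.js's area() result.
const round = (n: number): number => parseFloat(n.toFixed(2));

const landArea = 1000;   // m² (what area() might return for a polygon)
const floorHeight = 3;   // metres per floor (user input)
const floorNumber = 5;   // number of floors (user input)
const lotCoverage = 80;  // percent of the lot covered by the building

const buildingHeight = floorHeight * floorNumber;           // 15 m
const buildingArea = round(landArea * (lotCoverage / 100)); // 800 m²
const volume = round(buildingArea * buildingHeight);        // 12000 m³

console.log({ buildingHeight, buildingArea, volume });
```

Note that `buildingArea * buildingHeight` expands to the same `landArea * (lotCoverage / 100) * floorHeight * floorNumber` product used in the original function, so the result is identical.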

Transforming Coordinates with Proj4js

Geographic data comes in many different Coordinate Reference Systems (CRSs). Think of a CRS as the specific way coordinates (like pairs of numbers) are used to locate things on the Earth's surface. WGS84 (used by GPS, typically represented as latitude/longitude) is one, but there are many others, often "projected" systems optimized for specific regions (like UTM zones or state plane systems) or specific data types (like LAZ point clouds).

If your data (like a LAZ file) is in a different CRS than the one your map library expects (Mapbox/Deck.gl typically work with WGS84 or Web Mercator), you need to transform the coordinates. This is done using Proj4js.

The src/utils/projection.ts file sets up Proj4js for a specific transformation relevant to potential LAZ data.

// Inside src/utils/projection.ts (Simplified)
import { Proj4Projection } from "@math.gl/proj4";

// Define the coordinate systems we want to transform between
// EPSG:3857 is Web Mercator (common for online maps)
// WGS84 is the global standard for GPS (often represented as latitude/longitude)
export const positionProjection = new Proj4Projection({
  from: "EPSG:3857", // Source system (e.g., where LAZ data might be)
  to: "WGS84",      // Target system (e.g., what Deck.gl/Mapbox need)
});

// Function to transform LAZ point data coordinates
export const transformLazData = (lazData: {
  attributes: { POSITION: { value: Float32Array } };
}) => {
  // Get the raw position data (a flat array of [x1, y1, z1, x2, y2, z2, ...])
  const positions = lazData.attributes.POSITION.value;

  // Loop through the positions array, stepping by 3 (for x, y, z)
  for (let i = 0; i < positions.length - 1; i += 3) {
    // Get the [x, y, z] coordinates for the current point
    const vertex = Array.from(positions.subarray(i, i + 3));

    // Use the Proj4Projection object to transform the vertex coordinates
    const transformed = positionProjection.project(vertex); // transform from EPSG:3857 to WGS84

    // Overwrite the original coordinates in the array with the transformed ones
    positions.set(transformed, i);
  }
};

Here's a breakdown:

  1. Define Projection: new Proj4Projection({...}) creates a projection object configured to convert coordinates from a source CRS ("EPSG:3857") to a target CRS ("WGS84"). EPSG codes are standard identifiers for coordinate systems. EPSG:3857 is Web Mercator, widely used in web maps; WGS84 is the standard used by GPS.
  2. transformLazData: This function is designed to modify the coordinates in place within the lazData object, which is expected to contain a flat array of point positions.
  3. Loop and Transform: It loops through the positions array three numbers at a time (x, y, z). For each point, positionProjection.project(vertex) performs the coordinate transformation from EPSG:3857 to WGS84.
  4. Update Data: positions.set(transformed, i) then replaces the original x, y, z values in the array with the newly calculated WGS84 coordinates.

After running transformLazData, the coordinates in the lazData object are updated to the target WGS84 system, making them compatible with visualization libraries that expect this format.
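For intuition, the transformation Proj4js performs for this particular EPSG:3857 → WGS84 pair can be written out by hand. This is a sketch of the standard inverse spherical Web Mercator formula, not the project's code:

```typescript
// Sketch of the inverse spherical Web Mercator formula that the
// EPSG:3857 → WGS84 conversion boils down to (not the project's code).
const R = 6378137; // radius of the Web Mercator sphere, in metres

const mercatorToWgs84 = ([x, y, z]: number[]): number[] => {
  const lon = (x / R) * (180 / Math.PI);                     // metres → degrees
  const lat = (Math.atan(Math.sinh(y / R)) * 180) / Math.PI; // metres → degrees
  return [lon, lat, z];
};

// The eastern edge of the Web Mercator world (~20,037,508 m) maps to 180°:
console.log(mercatorToWgs84([20037508.34, 0, 0]));
```

Altitude (z) passes through unchanged, just as it does in transformLazData's three-number vertices.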

Calculating Camera Offset

Another processing task involves calculating a point relative to the camera's position and direction. This is used in src/utils/geo-operations.ts by the getOffsetBehindCamera function, potentially to place a marker slightly behind the camera location shown on the map.

// Inside src/utils/geo-operations.ts (Simplified)
import { toRadians } from "@math.gl/core"; // Helper for converting degrees to radians

// Function to calculate a point offset from camera coordinates
export const getOffsetBehindCamera = (
  bearing: number, // Camera direction in degrees (0-360)
  polygonElevation: number, // Base elevation of the building
  cameraCoordinates?: number[] // Optional [longitude, latitude, altitude] of camera
) => {
  // If no camera coordinates are provided, return a point slightly above the building base
  if (!cameraCoordinates) {
    return [0, 0, polygonElevation + 3]; // [longitude offset, latitude offset, altitude]
  }

  // Convert the camera bearing from degrees to radians (needed for Math.sin/cos)
  const radBearing = toRadians(bearing);

  // Calculate horizontal offsets based on bearing (using trigonometry)
  // -5 is a chosen distance (5 units) behind the camera
  const yOffset = -5 * Math.sin(radBearing); // Offset along the latitude direction
  const xOffset = -5 * Math.cos(radBearing); // Offset along the longitude direction

  // Return relative offsets plus an altitude slightly above the camera.
  // Note: these are offsets, not absolute longitude/latitude values;
  // deriving a true geographic point from a distance and bearing would
  // require spherical geometry.
  return [yOffset, xOffset, cameraCoordinates[2] + 3];
};

This function:

  1. Input: Takes the camera's bearing (direction in degrees), the polygonElevation (base height of the building), and optional cameraCoordinates ([longitude, latitude, altitude]).
  2. Handle Missing Coordinates: If cameraCoordinates are not provided, it returns a default [0, 0, elevation + 3], which represents a point directly above the building base.
  3. Calculate Offset: Using basic trigonometry (Math.sin, Math.cos) and converting the bearing to radians (toRadians), it calculates xOffset and yOffset values. The -5 implies calculating a point 5 units behind the direction the camera is facing.
  4. Return Offset: It returns the array [yOffset, xOffset, cameraCoordinates[2] + 3]: two relative offsets plus an altitude slightly above the camera. Note that these are offsets for placing a point relative to the camera's view, not new absolute WGS84 coordinates; a true geographic offset calculation would use spherical geometry to derive a new latitude/longitude from a distance and bearing.

This function performs a geometric calculation based on bearing to find a relative position useful for placing elements in the 3D scene.
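As a quick sanity check of the trigonometry (a hypothetical helper, not the project's code): for a camera facing due east (bearing 90°), the "behind" point should sit 5 units to the west, with no north–south component:

```typescript
// Hypothetical sanity check of the bearing trigonometry used above.
const toRadians = (deg: number): number => (deg * Math.PI) / 180;

const offsetBehind = (bearing: number): [number, number] => {
  const rad = toRadians(bearing);
  // 5 units directly opposite the direction the camera faces
  return [-5 * Math.sin(rad), -5 * Math.cos(rad)];
};

const [yOffset, xOffset] = offsetBehind(90); // camera facing due east
console.log(yOffset, xOffset); // ≈ -5 and ≈ 0
```

Likewise, a bearing of 0° (due north) yields offsets of roughly 0 and -5: the point lands 5 units to the south, behind the camera.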

How Processing Fits into the App Flow

As discussed in Chapter 5: Main App Logic, the MainView component orchestrates the application. When MainView receives raw data (from a file upload handled in handleImage or from an API call handled in getPolygon), it then calls these data processing functions before updating the state that the visualization and display components use.

Here's a simplified sequence:

sequenceDiagram
    participant MV as MainView
    participant DS as Data Source
    participant GP as Processing Function
    participant ST as MainView State
    participant UI as Display/Visualization Components

    MV->>DS: Gets raw geographic data (e.g. GeoJSON from a file or API)
    DS-->>MV: Returns raw data
    MV->>GP: Calls e.g. computeGeoMatrics with raw data
    GP->>GP: Performs calculations/transformations (Turf.js, Proj4js)
    GP-->>MV: Returns processed data (e.g. Metrics object)
    MV->>ST: Updates state (setMetrics, setGeo)
    MV->>UI: Re-renders components, passing updated state
    UI-->>User: Displays processed info (metrics panel) or uses transformed data (3D map)

This flow shows that geographic data processing is a crucial intermediate step. Raw data comes in, processing functions transform or calculate metrics from it, and the results of this processing are what the rest of the application uses to display information (Chapter 4) and create the 3D visualization (Chapter 3).

Conclusion

In this chapter, we explored the concept of Geographic Data Processing. We learned that raw geographic data often needs to be processed before it can be effectively visualized or analyzed. We saw how our project uses specialized libraries like Turf.js for spatial calculations (like area and center of mass) and Proj4js for coordinate system transformations. Functions like computeGeoMatrics, transformLazData, and getOffsetBehindCamera encapsulate these processing steps, taking raw geographic inputs and producing meaningful metrics, transformed coordinates, or derived positions. These processing steps are orchestrated by the main application logic (Chapter 5) and are essential for preparing data for the visualization (Chapter 3) and information display (Chapter 4) parts of the application.

With this chapter, we've covered the core concepts of the mapbox-gl_deck.gl_turf.js-ts project, from structuring views and handling user input to fetching data, processing geographic information, and visualizing it in 3D.


(This is the final chapter in the defined structure.)


Generated by AI Codebase Knowledge Builder. References: 1(https://github.com/buildvoc/mapbox-gl_deck.gl_turf.js-ts/blob/3d8a4a53d878db3324af6466e0f99e5fb072bbe7/src/utils/geo-operations.ts), 2(https://github.com/buildvoc/mapbox-gl_deck.gl_turf.js-ts/blob/3d8a4a53d878db3324af6466e0f99e5fb072bbe7/src/utils/projection.ts)