18 Mar 2024

Camera mapping between APS viewer and Revit - Part II, restore Revit camera on Viewer

In the previous blog post, "Camera mapping between APS viewer and Revit - Part I, restore viewer camera on Revit", we discussed how to restore the viewer camera back to Revit views. Now, let's take a look at how to restore the Revit camera in the APS Viewer.

Before starting, here is a safe harbor disclaimer: as we mentioned in the previous blog post, please be aware that the camera parameters in the APS Viewer and Revit are different, so the mapping results will not match perfectly.

 

Ok, let's start. The first step follows what my colleague Jeremy shared in his blog post "Revit Camera FOV". We leverage the power of the CustomExporter in the Revit API to get the Revit camera parameters, and then reuse the zoom-corners algorithm we shared previously, but this time in the reverse direction: we use the zoom corners to compute the camera position and target. (This implementation can be found at aps-viewer-revit-camera-sync/blob/main/APSRevitCameraSync/APSRevitCameraSync/CameraInfoExportContext.cs)

[Image: zoom-corners formula from Part I (ApsViewerRevitCameraSync-Formula-Zoom-Corners)]

 

  • Here are the steps for the perspective camera:
    1. Get camera parameters by using ViewNode#GetCameraInfo() in the IExportContext#OnViewBegin.
    2. Get the following from CameraInfo:
      • FOV = 2 x atan( HorizontalExtent / ( 2 x TargetDistance ) ) if the camera is in perspective mode; otherwise, it's 0. (Note: the APS Viewer expects the FOV in degrees.)
      • AspectRatio = HorizontalExtent / VerticalExtent
      • IsPerspective
    3. Get the following from View#GetViewOrientation3D():
      • ForwardDirection
      • UpDirection
    4. Calculate EyePosition and Target from UIView#GetZoomCorners() using the algorithm we shared in part I.
    5. Build the APS Viewer camera state from the above:
      • eye = EyePosition
      • target = eye + (unit vector of the ForwardDirection) x abs(TargetDistance)
      • up = UpDirection
      • aspectRatio = AspectRatio
var view3d = view3dElem as View3D;
var cameraInfo = node.GetCameraInfo();
var isPerspective = cameraInfo.IsPerspective;

double fov = 0;
if (isPerspective)
{
    // https://thebuildingcoder.typepad.com/blog/2020/04/revit-camera-fov-forge-partner-talks-and-jobs.html
    fov = 2 * Math.Atan(cameraInfo.HorizontalExtent / (2 * cameraInfo.TargetDistance)) * 180 / Math.PI;
}

var aspect = cameraInfo.HorizontalExtent / cameraInfo.VerticalExtent;

var viewOrientation = view3d.GetOrientation();
var up = viewOrientation.UpDirection;
var eye = viewOrientation.EyePosition;
var forwardDirection = viewOrientation.ForwardDirection;
var rightDirection = forwardDirection.CrossProduct(up);

IList<UIView> views = this.uIDocument.GetOpenUIViews();
UIView currentView = views.FirstOrDefault(t => t.ViewId == this.viewId);

IList<XYZ> corners = currentView.GetZoomCorners();
XYZ corner1 = corners[0];
XYZ corner2 = corners[1];

// center of the UI view
double x = (corner1.X + corner2.X) / 2;
double y = (corner1.Y + corner2.Y) / 2;
double z = (corner1.Z + corner2.Z) / 2;
XYZ viewCenter = new XYZ(x, y, z);
XYZ target = viewCenter;

// Reverse the Part I formula: recover the view height (ortho scale) from the
// zoom-corner diagonal, then back the eye off from the target along the
// forward direction by that distance.
XYZ diagVector = corner1 - target;
double dist = corner1.DistanceTo(viewCenter);
var orthoHeight = dist * Math.Sin(diagVector.AngleTo(rightDirection)) * 2;

eye = target - forwardDirection * orthoHeight;
  • Here are the steps for the orthographic camera:
    1. Follow the steps for the perspective camera from step 1 to step 5.
    2. The `TargetDistance` we get from the Revit API for an orthographic camera is near infinite, which is invalid in the APS Viewer scene. So here we use the ReferenceIntersector in the Revit API to ray-cast along the view direction, take the nearest hit point on an intersected object as the camera target, and recalculate the camera position from it. Otherwise, the calculated camera position and target would lie outside the scene's bounding box, and in my experience the viewer then tries to adjust the camera info to work around issues with the orthographic camera.
var refIntersector = new ReferenceIntersector(view3d);
XYZ rayDirection = new XYZ(forwardDirection.X, forwardDirection.Y, forwardDirection.Z);
XYZ rayOrigin = new XYZ(viewCenter.X, viewCenter.Y, viewCenter.Z);
ReferenceWithContext referenceWithContext = refIntersector.FindNearest(rayOrigin, rayDirection);

Reference hitResult = referenceWithContext.GetReference();
if (hitResult != null)
{
    XYZ intersection = hitResult.GlobalPoint;
    target = intersection;
    eye = target - forwardDirection * orthoHeight;
}

Afterward, combine all the calculated camera info from the above into JSON, and pass it to the APS Viewer. (Note: in this sample, we pass the camera data by calling the viewer sample app's `POST api/viewStates:sync` endpoint, a Node.js route defined in aps-viewer-revit-camera-sync/blob/main/APSViewerApp/routes/viewStateSync.js.)

var cameraDef = new WebViewerViewState
{
    Aspect = aspect,
    IsPerspective = isPerspective,
    FieldOfView = fov,
    Position = new double[] { eye.X, eye.Y, eye.Z },
    Target = new double[] { target.X, target.Y, target.Z },
    Up = new double[] { up.X, up.Y, up.Z },
    OrthoScale = orthoHeight
};

var data = JsonConvert.SerializeObject(cameraDef, new JsonSerializerSettings
{
    ContractResolver = new CamelCasePropertyNamesContractResolver()
});

var url = "http://localhost:8080/api/viewStates:sync";

try
{
    var client = new HttpClient();
    var content = new StringContent(data, Encoding.UTF8, "application/json");
    var result = client.PostAsync(url, content).Result;
}
catch (Exception ex)
{
    // Log
    System.Diagnostics.Trace.WriteLine(ex.Message);
}

 

The second part is about the viewer app side. In this blog post, we use a modified simple-viewer tutorial sample for the demo with the following changes:

 

For implementation details, please refer to the source code via the links above. Here, we only highlight how we restore the camera state from Revit in the APS Viewer: we map the camera position and target from the Revit coordinate space to viewer space by applying the transform obtained from `model.getModelToViewerTransform()` in the viewer.

// `data` is the JSON string of the camera state sent from the Revit add-in
const viewData = JSON.parse(data);
const view = {
    aspect: viewData.aspect,
    isPerspective: viewData.isPerspective,
    fov: viewData.fov,
    position: new THREE.Vector3().fromArray(viewData.position),
    target: new THREE.Vector3().fromArray(viewData.target),
    up: new THREE.Vector3().fromArray(viewData.up),
    orthoScale: viewData.orthoScale
};

const model = viewer.getAllModels()[0];
const offsetMatrix = model.getModelToViewerTransform();
view.position = view.position.applyMatrix4(offsetMatrix);
view.target = view.target.applyMatrix4(offsetMatrix);

viewer.impl.setViewFromCamera(view);

 

Here is the demo. You can find the full sample code at https://github.com/yiskang/aps-viewer-revit-camera-sync/.

 

 

Enjoy it!! 
