New views can be created by splitting the
view frame using the SplitView controls at the top-right corner of the
view frame. Splitting a view divides the view into two equal parts, either
vertically or horizontally, based on the button used for the split.
On splitting a view, an empty frame with buttons for all known types
of views is shown. Simply click on one of those buttons to create a new view of
a chosen type.
You can make the views of the active layout fullscreen by using View > Fullscreen (layout) (or using the F11 key).
You can also make the active view alone fullscreen by using View > Fullscreen (active view) (or using CTRL + F11 keys).
To return to normal mode, use the Esc key.
# Create a view
>>> view1 = CreateRenderView()
# Create a second view
>>> view2 = CreateRenderView()
# Check if view2 is the active view
>>> view2 == GetActiveView()
True
# Make view1 active
>>> SetActiveView(view1)
>>> view1 == GetActiveView()
True
# To get existing tabs/layouts
>>> layouts = GetLayouts()
>>> print(layouts)
{('ViewLayout1', '264'): <paraview.servermanager.ViewLayout object at 0x2e5b7d0>}
# To get the layout corresponding to a particular view
>>> print(GetLayout(view))
<paraview.servermanager.ViewLayout object at 0x2e5b7d0>
# If view is not specified, the active view is used
>>> print(GetLayout())
<paraview.servermanager.ViewLayout object at 0x2e5b7d0>
# To create a new tab
>>> new_layout = servermanager.misc.ViewLayout(registrationGroup="layouts")
# To split the cell containing the view, either horizontally or vertically
>>> view = GetActiveView()
>>> layout = GetLayout(view)
# fraction is optional; if not specified, the frame is split evenly.
>>> locationId = layout.SplitViewVertical(view=view, fraction=0.5)
# To get the location of the view in the layout
>>> locationId = layout.GetViewLocation(view)
# To change the split fraction
>>> layout.SetSplitFraction(locationId, 0.75)
# To maximize a particular view
>>> layout.MaximizeCell(locationId)
# To assign a view to a particular cell
>>> view2 = CreateRenderView()
>>> layout.AssignView(locationId, view2)
Since the visualization process in general focuses on reducing data to
generate visual representations, the rendering (broadly speaking) is less time-intensive
than the actual data processing. Thus, changing properties that affect
rendering is not as compute-intensive as transforming the data itself. For example,
changing the color on a surface mesh is not as expensive as generating the mesh
in the first place. Hence, the need to Apply such properties becomes less
relevant. At the same time, when changing display properties such as opacity,
you may want to see the result as you change the property to decide on the final
value. Hence, it is desirable to see the updates immediately.
# 1. Save reference when a view is created
>>> view = CreateView("RenderView")
# 2. Get reference to the active view.
>>> view = GetActiveView()
The properties available on a view vary based on the type of the view. You can use the help function to discover the available properties.
>>> view = CreateRenderView()
>>> help(view)
Help on RenderView in module paraview.servermanager object:

class RenderView(Proxy)
 |  View proxy for a 3D interactive render
 |  view.
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |
 |  CenterAxesVisibility
 |      Toggle the visibility of the axes showing the center of
 |      rotation in the scene.
 |
 |  CenterOfRotation
 |      Center of rotation for the interactor.
 |
 ...

# Once you have a reference to the view, you can then get/set the properties.
# Get the current value
>>> print(view.CenterAxesVisibility)
1
# Change the value
>>> view.CenterAxesVisibility = 0
# Using SetDisplayProperties/GetDisplayProperties to access the display
# properties for the active source in the active view.
>>> print(GetDisplayProperties("Opacity"))
1.0
>>> SetDisplayProperties(Opacity=0.5)
# Get display properties object for the active source in the active view.
>>> disp = GetDisplayProperties()
# Get the available representation types. Available is an option for all
# properties that have a Domain.
>>> disp.GetProperty("Representation").Available
['Outline', 'Points', 'Wireframe', 'Surface', 'Surface With Edges']
# You can also save the object returned by Show.
>>> disp = Show()
# Now, you can directly access the properties.
>>> print(disp.Opacity)
0.5
>>> disp.Opacity = 0.75
You can use the help method to discover the properties available on a display object.
>>> disp = Show()
>>> help(disp)
Help on GeometryRepresentation in module paraview.servermanager object:

class GeometryRepresentation(SourceProxy)
 |  ParaView's default representation for showing any type of
 |  dataset in the render view.
 |
 |  Method resolution order:
 |      GeometryRepresentation
 |      SourceProxy
 |      Proxy
 |      __builtin__.object
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |
 |  ...
 |
 |  CenterStickyAxes
 |      Keep the sticky axes centered in the view window.
 |
 |  ColorArrayName
 |      Set the array name to color by. Set it to empty string
 |      to use solid color.
 |
 |  ColorAttributeType
 |  ...
The RenderView is the most commonly used view in ParaView. It is used to render
geometries and volumes in a 3D scene. This is the view that you typically think
of when referring to 3D visualization. The view relies on techniques to map data
to graphics primitives such as triangles, polygons, and voxels, and it renders
them in a scene.
Most of the scientific datasets discussed in Section 3.1
are composed of meshes. These meshes can be mapped to graphics primitives using
several of the established visualization techniques. That is, you can compute the
outer surface of these meshes and then render that surface as filled polygons, you can
just render the edges, or you can render the data as a nebulous blob to get a better
understanding of the internal structure in the dataset. Plugins, like
DigitalRockPhysics, can provide additional ways of rendering data using advanced
techniques that provide more insight into the data.
Unless you changed the default setting, a new RenderView will be created
when paraview starts up or connects to a new server. To create a
RenderView in paraview, split or close a view, and select the
RenderView button. You can also convert a view to a RenderView (or any other
type) by right-clicking on the view's title bar and picking from the ConvertTo sub-menu. This simply closes the chosen view and creates the selected view type
in its place.
Typically, in ParaView, you interact with a 3D scene. However, there are times when you are working with 2D datasets such as slice planes or 2D images. In such cases, paraview provides a separate set of interaction options suited for 2D interaction. You can toggle between the default 3D interaction options and the 2D interaction options by clicking the 2D or 3D button in the view toolbar. The default interaction options for 2D interaction are as follows.
Modifier      Left Button    Middle Button    Right Button
(none)        Pan            Roll             Zoom
⇧             Zoom           Zoom             Zoom To Mouse
CTRL or ⌘     Roll           Pan              Rotate
By default, ParaView determines whether the data is 2D or 3D when loading it and sets the interaction mode accordingly. This behavior can be changed with the DefaultInteractionMode setting on the RenderView tab of the Settings dialog. The default is "Automatic, based on the first time step", but if you want to force the interaction mode, the setting can be changed to "Always 2D" or "Always 3D".
You can also change the Background used for this view. You can either set it as a
Single color or as a Gradient changing between two colors, or you can select an
Image (or texture) to use as the background.
Lastly, the SurfaceLIC representation is available for surface datasets with
vector point data arrays. LIC stands for line integral convolution, which is a visualization
technique that shows the direction of flow as a noise pattern smeared in the
direction of flow.
Figure 4.6 An example of the SurfaceLIC representation showing the direction of a
vector data array, colored by a different scalar array (Density).
If instead you want to pseudocolor using an attribute array
available on the dataset, select that array name from the combo-box. For
multi-component arrays, you can pick a particular component or Magnitude to
use for scalar coloring. ParaView will automatically set up a color transfer
function it will use to map the data array to colors. The default range for the
transfer function is set up based on the TransferFunctionResetMode general
setting in the Settings dialog when the transfer function is first created.
If another dataset is later colored by a data array with the same name, the range
of the transfer function will be updated according to the AutomaticRescaleRangeMode
property in the ColorMapEditor . To reset the transfer function range to the
range of the data array in the selected dataset, you can use the Rescale
button. Remember that, despite the fact that you can set the scalar array with
which to color when rendering as Outline , the outline itself continues to use
the specified solid color.
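From Python, the same setup is available through the ColorBy helper in paraview.simple. A minimal sketch follows; 'Density' stands in for whatever data array your dataset provides, and the rescale call mimics the Rescale button using the argument pattern seen in trace output:

# A minimal sketch: pseudocolor the active source by a point data array.
# 'Density' is a placeholder array name.
>>> disp = GetDisplayProperties()
>>> ColorBy(disp, ('POINTS', 'Density'))
# Rescale the color transfer function to the current data range.
>>> disp.RescaleTransferFunctionToDataRange(True, False)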
ScalarColoring properties are only relevant when you have selected a data
array with which to pseudocolor. The MapScalars checkbox affects whether a color
transfer function should be used (Figure 4.7).
If unchecked, and the data array can directly
be interpreted as colors, then those colors are used directly. If not, the color
transfer function will be used. A data array can be interpreted as colors if, and
only if, it is an unsigned char, float, or double array with two, three, or four
components. If the data array is unsigned char, the color values are defined between
0 and 255 while if the data array is float or double, the color values are expected
to be between 0 and 1. InterpolateScalarsBeforeMapping controls how color
interpolation happens across rendered polygons. If
on, scalars will be interpolated within polygons, and color mapping will occur
on a per-pixel basis. If off, color mapping occurs at polygon points, and colors
are interpolated, which is generally less accurate. Refer to the Kitware blog
[PatMarion] for a detailed explanation of this option.
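Both options map to display properties, so they can also be toggled from Python; a short sketch on the active representation:

>>> disp = GetDisplayProperties()
# Map the array through the color transfer function instead of
# interpreting its values directly as colors.
>>> disp.MapScalars = 1
# Interpolate scalars within polygons for per-pixel color mapping.
>>> disp.InterpolateScalarsBeforeMapping = 1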
If UseNanColorForMissingArrays is enabled, the special color designated
for NaN values is also used as the color for the parts of a composite dataset
that are missing the scalar array used for color mapping.
The CoordinateShiftScaleMethod is used to choose how to normalize point coordinates
to improve rendering quality. Mesh points are sent to the GPU as single-precision float data
which can result in resolution issues due to limited precision. VTK includes a variety of
methods to normalize the point coordinates to a better range for single-precision floats
prior to sending them to the GPU. AutoShiftScale is a good setting that should work
for most datasets - it computes a shift and scale factor according to a heuristic involving
dataset size and position relative to the origin. AlwaysAutoShiftScale recomputes the
shift and scale every time. AutoShiftOnly only shifts the data - this is useful when
data is far away from the origin. NearFocalPlaneShiftScale and FocalPointShiftScale
work based on the camera's current near clipping plane and focal point, respectively. This makes
them the most robust settings, especially for very large datasets, but they will renormalize the
points occasionally as the camera's settings change. Renormalizing points requires reuploading
the data to the GPU, so there may be a performance cost with these last methods.
The property UseShaderReplacements enables you to customize the shader code
VTK uses for rendering by specifying shader replacements with a JSON string.
The JSON string can be a single node or an array of nodes with the following properties:
"type": specifies the type of shader the replacement is about.
It can be either "vertex", "fragment" or "geometry".
"original": specifies the original string to be replaced in the shader code.
This string is generally a pattern defined by the mapper
vtkOpenGLPolyDataMapper at specific locations of the shader
GLSL source code.
"replacement": specifies the replacement string in GLSL source code.
Note that the JSON parser supports multi-line entries.
Here's an example of a simple shader replacement (draw all the fragments in full red
color without any shading consideration):
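(The example is reconstructed here as a sketch: the ShaderReplacements property name is an assumption paired with UseShaderReplacements, and //VTK::Light::Impl is one of vtkOpenGLPolyDataMapper's fragment-shader substitution points.)

>>> disp = GetDisplayProperties()
>>> disp.UseShaderReplacements = 1
# Replace the lighting implementation in the fragment shader so every
# fragment is written as opaque red (property name assumed).
>>> disp.ShaderReplacements = '''
... {
...   "type": "fragment",
...   "original": "//VTK::Light::Impl",
...   "replacement": "gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0);"
... }
... '''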
The NonlinearSubdivisionLevel property is used when rendering datasets with higher-
order elements. Use this to set the subdivision level for triangulating higher
order elements. The higher the value, the smoother the edges. This comes at the
cost of more triangles and, hence, potentially, increased rendering time.
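For example, on the active representation (the value is illustrative):

# Increase the subdivision level to smooth edges of higher-order cells.
>>> disp = GetDisplayProperties()
>>> disp.NonlinearSubdivisionLevel = 3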
The BlockColorsDistinctValues property sets the number
of unique colors to use when coloring multiblock datasets by block ID. Finally,
UseDataPartitions controls whether data is redistributed when it is
rendered translucently. When off (default value), data is repartitioned by the
compositing algorithm prior to rendering. This is typically an expensive
operation that slows down rendering. When this option is on, the existing data
partitions are used, and the cost of data redistribution is avoided. However, if
the partitions are not sortable in back-to-front order, rendering artifacts may
occur.
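Both properties can be set from Python as well; a short sketch (the values are illustrative):

>>> disp = GetDisplayProperties()
# Number of distinct colors to cycle through when coloring by block ID.
>>> disp.BlockColorsDistinctValues = 12
# Use existing data partitions for translucent rendering, skipping the
# repartitioning step (may cause artifacts if partitions aren't sortable).
>>> disp.UseDataPartitions = 1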
>>> from paraview.simple import *
>>> view = CreateRenderView()
# Alternatively, use CreateView.
>>> view = CreateView("RenderView")
You can use Show and Hide to show or hide data produced by a pipeline module in a view.
>>> source = Sphere()
>>> view = CreateRenderView()
# Show active source in active view.
>>> Show()
# Or specify source and view explicitly.
>>> Show(source, view)
# Hide source in active view.
>>> Hide(source)
# Get camera from the active view, if possible.
>>> camera = GetActiveCamera()
# Or, get the camera from a specific render view.
>>> camera = view.GetActiveCamera()
# Now, you can use methods on camera to move it around the scene.
# Divide the camera's distance from the focal point by the given dolly value.
# Use a value greater than one to dolly-in toward the focal point, and use a
# value less than one to dolly-out away from the focal point.
>>> camera.Dolly(10)
# Set the roll angle of the camera about the direction of projection.
>>> camera.Roll(30)
# Rotate the camera about the view up vector centered at the focal point. Note
# that the view up vector is whatever was set via SetViewUp, and is not
# necessarily perpendicular to the direction of projection. The result is a
# horizontal rotation of the camera.
>>> camera.Azimuth(30)
# Rotate the focal point about the view up vector, using the camera's position
# as the center of rotation. Note that the view up vector is whatever was set
# via SetViewUp, and is not necessarily perpendicular to the direction of
# projection. The result is a horizontal rotation of the scene.
>>> camera.Yaw(10)
# Rotate the camera about the cross product of the negative of the direction
# of projection and the view up vector, using the focal point as the center
# of rotation. The result is a vertical rotation of the scene.
>>> camera.Elevation(10)
# Rotate the focal point about the cross product of the view up vector and the
# direction of projection, using the camera's position as the center of
# rotation. The result is a vertical rotation of the camera.
>>> camera.Pitch(10)
>>> camera.SetFocalPoint(0, 0, 0)
>>> camera.SetPosition(0, 0, -10)
>>> camera.SetViewUp(0, 1, 0)
>>> camera.SetViewAngle(30)
>>> camera.SetParallelProjection(False)
# If ParallelProjection is set to True, then you'll need to specify the
# parallel scale as well, i.e., the height of the viewport in
# world-coordinate distances. The default is 1. Note that the `scale'
# parameter works as an `inverse scale' where larger numbers produce smaller
# images. This method has no effect in perspective projection mode.
>>> camera.SetParallelScale(1)
>>> view = GetActiveView()
# Set center axis visibility
>>> view.CenterAxesVisibility = 0
# Or you can use this variant to set the property on the active view.
>>> SetViewProperties(CenterAxesVisibility=0)
# Another way of doing the same
>>> SetViewProperties(view, CenterAxesVisibility=0)
# Similarly, you can change orientation axes related properties
>>> view.OrientationAxesVisibility = 0
>>> view.OrientationAxesLabelColor = (1, 1, 1)
>>> displayProperties = GetDisplayProperties(source, view)
# Both source and view are optional. If not specified, the active source
# and active view will be used.
# Now one can change properties on this object
>>> displayProperties.Representation = "Outline"
# Or use the SetDisplayProperties API.
>>> SetDisplayProperties(source, view, Representation="Outline")
# Here too, source and view are optional and when not specified,
# active source and active view will be used.
help 関数を使用すると、表示プロパティオブジェクトで使用可能なプロパティに関する情報をいつでも取得できます。
Display properties allow you to setup which series or data arrays are plotted in
this view. You start by picking the AttributeType . Select the attribute
type that has the arrays of interest. For example, if you are plotting arrays
associated with points, then you should pick PointData . Arrays with
different associations cannot be plotted together. You may need to apply filters
such as CellDatatoPointData or PointDatatoCellData to convert
arrays between different associations to do that.
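For example, a minimal sketch converting cell arrays to point arrays before plotting:

# Convert cell-associated arrays on the active source to
# point-associated arrays so they can be plotted together.
>>> converted = CellDatatoPointData(Input=GetActiveSource())
>>> Show(converted)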
SeriesParameters control series or data arrays plotted on the Y-axis. All
available data arrays are listed in the table widget that allows you to
check/uncheck a series to plot in the first column. The second column in the
table shows the associated color used to plot that series. You can double-click
the color swatch to change the color to use. By default, ParaView will try to
pick a palette of discrete colors. The third column lets you set the
opacity of the series plot elements. The fourth column (Variable) shows the
name of the variable to plot. The fifth column (LegendName) shows the label to use for
that series in the legend. By default, it is set to be the same as the array
name. You can double-click to change the name to your choice, e.g., to add units.
Other series parameters include LineThickness, LineStyle, MarkerStyle, and MarkerSize. To change any of these, highlight a row in the
SeriesParameters widget, and then change the associated parameter to affect
the highlighted series. You can change properties for multiple series at once by
selecting several of them using the CTRL (or ⌘) and ⇧ keys.
>>> from paraview.simple import *
# Create a data source to probe into.
>>> Wavelet()
<paraview.servermanager.Wavelet object at 0x1156fd810>
# We update the source so that when we create the PlotOverLine filter
# it has input data available to determine good defaults. Otherwise,
# we will have to manually set up the defaults.
>>> UpdatePipeline()
# Now, create the PlotOverLine filter. It will be initialized using
# defaults based on the input data.
>>> PlotOverLine()
<paraview.servermanager.PlotOverLine object at 0x1156fd490>
# Show the result.
>>> Show()
<paraview.servermanager.XYChartRepresentation object at 0x1160a6a10>
# This will automatically create a new Line Chart View if the
# active view is not a Line Chart View, since the PlotOverLine
# filter indicates it as the preferred view. You can also explicitly
# create it by using the CreateView() function.
# Display the result.
>>> Render()
# Access display properties object.
>>> dp = GetDisplayProperties()
>>> print(dp.SeriesVisibility)
['arc_length', '0', 'RTData', '1']
# This is a list with key-value pairs where the first item is the name
# of the series, then its visibility, and so on.
# To toggle visibility, change this list e.g.
>>> dp.SeriesVisibility = ['arc_length', '1', 'RTData', '1']
# Same is true for other series parameters, including series color,
# line thickness, etc.
# For series color, the value consists of 3 values: red, green, and blue
# color components.
>>> print(dp.SeriesColor)
['arc_length', '0', '0', '0', 'RTData', '0.89', '0.1', '0.11']
# For series labels, the value is the label to use.
>>> print(dp.SeriesLabel)
['arc_length', 'arc_length', 'RTData', 'RTData']
# e.g. to change RTData's legend label, we can do something as follows:
>>> dp.SeriesLabel[3] = 'RTData -- new label'
# Access view properties object.
>>> view = GetActiveView()
# or
>>> view = GetViewProperties()
# To change titles
>>> view.ChartTitle = "My Title"
>>> view.BottomAxisTitle = "X Axis"
>>> view.LeftAxisTitle = "Y Axis"
# To create a multi-slice view, use:
>>> view = CreateView("MultiSlice")
# Use properties on view to set/get the slice offsets.
>>> view.XSliceValues = [-10, 0, 10]
>>> print(view.XSliceValues)
[-10, 0, 10]
# Similar to XSliceValues, you have YSliceValues and ZSliceValues.
>>> view.YSliceValues = [0]
>>> view.ZSliceValues = []
def setup_data(view):
    # Iterate over visible data objects
    for i in range(view.GetNumberOfVisibleDataObjects()):
        # You need to use GetVisibleDataObjectForSetup(i)
        # in setup_data to access the data object.
        dataObject = view.GetVisibleDataObjectForSetup(i)

        # The data object has the same data type and structure
        # as the data object that sits on the server. You can
        # query the size of the data, for instance, or do anything
        # else you can do through the Python wrapping.
        print('Memory size: {0} kilobytes'.format(dataObject.GetActualMemorySize()))

        # Clean up from previous calls here. We want to unset
        # any of the arrays requested in previous calls to this function.
        view.DisableAllAttributeArrays()

        # By default, no arrays will be passed to the client.
        # You need to explicitly request the arrays you want.
        # Here, we'll request the Density point data array
        view.SetAttributeArrayStatus(i, vtkDataObject.POINT, "Density", 1)
        view.SetAttributeArrayStatus(i, vtkDataObject.POINT, "Momentum", 1)

        # Other attribute arrays can be set similarly
        view.SetAttributeArrayStatus(i, vtkDataObject.FIELD, "fieldData", 1)
GetVisibleDataObjectForSetup(visibleObjectIndex) -
This returns the visibleObjectIndex'th visible data object in
the view. (The data object will have an open eye next to it in the
PipelineBrowser .)
def render(view, width, height):
    figure = python_view.matplotlib_figure(width, height)

    ax = figure.add_subplot(1, 1, 1)
    ax.minorticks_on()
    ax.set_title('Plot title')
    ax.set_xlabel('X label')
    ax.set_ylabel('Y label')

    # Process only the first visible object in the pipeline browser
    dataObject = view.GetVisibleDataObjectForRendering(0)

    x = dataObject.GetPointData().GetArray('X')

    # Convert VTK data array to numpy array for plotting
    from paraview.numpy_support import vtk_to_numpy
    np_x = vtk_to_numpy(x)

    ax.hist(np_x, bins=10)

    return python_view.figure_to_image(figure)
This definition of the render(view,width,height) function
creates a histogram of a point data array named X from the first
visible object in the PipelineBrowser . Note the conversion
function, python_view.figure_to_image(figure) , in the last line.
This converts the matplotlib Figure object created
with python_view.matplotlib_figure(width,height) into a
vtkImageData object suitable for display in the viewport.
Trigger actions are assigned to the right trigger by default and include grabbing,
picking, probing, interactive clipping, teleportation, and adding points to sources
(such as a polyline source). The current action can be chosen via the XR menu
(see Section 4.14.3).
Adding points to a source --- Pressing the right trigger places a point at the tip of the right controller. This works only when the active source supports point placement, such as a polyline source.
Pipeline Browser --- This is the same PipelineBrowser present in ParaView.
The visibility for each item in the pipeline can be modified by pointing the
navigation ray on the eye icon and pressing the right trigger.
Panels --- VR options are distributed into four panels, which can be displayed by
clicking on the corresponding tab:
Interaction --- This panel contains options related to the interactions with
the scene using the controllers (see Section 4.14.3.1).
Movement --- This panel contains options related to the camera movement and
poses (see Section 4.14.3.2).
Environment --- This panel contains global options related to the scene
(see Section 4.14.3.3).
Widgets --- This panel contains options related to VR-specific widgets
(see Section 4.14.3.4).
Exit XR --- This button closes the current XR View.
Animation Buttons --- These buttons are used to navigate timesteps for
temporal datasets.
Clear --- This button clears all previously saved camera poses.
Savepose --- This button saves the current pose in the list of saved
camera poses. Up to 6 poses can be saved this way. For each saved pose, a
dedicated button is added to the right of this button.
Figure 4.32 Environment panel of the XR integrated menu.
ViewUpDirection --- These buttons set which axis points upwards from
the top of the HMD. This is useful when datasets or skyboxes are oriented
differently from the default.
SceneScale --- These buttons change the scaling factor of the scene.
A higher value results in all objects appearing larger.
ShowFloor --- This button allows hiding or showing the floor as a white plane.
DistanceWidget --- This button adds a measuring tool to the scene.
Press the right trigger once to place the starting point where the right
controller is located, then press a second time with the controller at the
desired location to place the second point. Four values are displayed next
to the tool: the distance and the X, Y, and Z differences between the two points.
The tips of the line can be grabbed and moved individually after placing them.
Cropping buttons --- The following buttons provide tools to crop data in real time.
Cropping planes can be moved by placing the right controller on them and grabbing
them with the right trigger. More than one plane can be added to the scene.
AddCropPlane --- This button adds a crop plane to the scene.
AddThickCrop --- This button adds a thick crop plane to the scene.
HideCropPlanes --- This button hides all cropping planes in the scene.
CropThickness --- This horizontal slider sets the thickness of newly created
thick cropping planes (it does not affect existing ones). By default,
the value is set to auto, which adjusts the plane thickness according to the
current scene scale.
SnapCropPlanes --- This button lets you choose whether the cropping
planes should snap to the coordinate axes.
The remoting feature is only available on Windows for the Hololens 2 and requires an additional
package named Microsoft.Holographic.Remoting.OpenXr. With this, ParaView can connect to
another application on the remote device if both applications use the same version
of this package.
Note that the ParaView release uses the same version as the official player application developed
by Microsoft, available in the Microsoft Store, which is version 2.9.2.
If you do not have an application already deployed on the remote device, we recommend downloading the
Holographic Remoting Player application from the Microsoft Store.
First, start the application on the remote device.
After launching this application, it will wait for another application to connect to it via an IP address.
Figure 4.36 Remote application awaiting connection on the Hololens 2.
You can now start ParaView and process your data however you like. When you are ready to test it on
the Hololens 2, enable the XRInterface plugin. You will need to set several options:
DesiredXRRuntime --- set it to OpenXR because the Microsoft.Holographic.Remoting.OpenXr depends on it.
UseOpenXRRemoting --- enable or disable the remoting support.
Remoteaddress --- set the IP address to connect ParaView and the application in the Hololens 2.
Figure 4.37 XRInterface panel with OpenXR Remoting options.
After setting these options, you can click on SendToXR. Once the connection is established, you will be able
to see and interact with your dataset.
CAVE support in ParaView is provided through the CAVEInteraction plugin
(which was at one time called VRPlugin). To load this plugin, open the
PluginManager via Tools > Manage Plugins.... Click on
CAVEInteraction, then on the button LoadSelected. This will open the
CAVE Interaction Manager panel.
By default, the CAVEInteractionManager panel appears on the lower left upon loading the plugin.
To open it manually, search for the corresponding checkbox in the main menu via
View > CAVE Interaction Manager.
At the highest level, using the plugin is just a matter of configuring
VR events and interactions (and likely saving it all to a state file for easy and
quick reconstitution later), and then clicking the "Start" button to
start the VR events streaming into your interactor styles.
But as you might guess from looking at the panel above, there are a couple
of concepts to understand about CAVE Interactions first: VR Connections and
Interactions.
Interacting with your CAVE through ParaView involves the following:
Configure events using the "VR Connections" section
Add one or more interactor styles using the "Interactions" section
Click the Start button to begin interacting
Click the Stop button to stop interacting
Interactor styles are quite flexible in allowing you to manipulate ParaView
proxies and their associated properties, but one main goal of interactor
styles is to allow you to navigate around your dataset. You can find more
information on navigation, events, and interactor styles in the sections
below.
A CAVE can help you explore your data by allowing you to move around within
your physical space, looking at the data from different positions and
angles to gain a better understanding. When the scale of the data exceeds
the size of your physical space, the ability to navigate becomes important.
One of the main goals of the CAVEInteraction plugin is to support this kind
of navigation.
For this purpose, the base class of all CAVE interactors provides methods to
get and set the navigation matrix, and these should be used to navigate in
the CAVE. The suggested approach is to:
access the current navigation matrix using the provided GetNavigationMatrix method
multiply it on the left with the next desired navigation transformation
update the current navigation matrix with SetNavigationMatrix, as in the sketch below.
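A minimal sketch of these three steps, assuming a Python interactor style where vtkSelf is the interactor style proxy and delta is a vtkMatrix4x4 holding the next incremental transformation:

from vtkmodules.vtkCommonMath import vtkMatrix4x4

def apply_navigation(vtkSelf, delta):
    # 1. Access the current navigation matrix.
    current = vtkSelf.GetNavigationMatrix()
    # 2. Multiply it on the left by the desired transformation:
    #    updated = delta * current
    updated = vtkMatrix4x4()
    vtkMatrix4x4.Multiply4x4(delta, current, updated)
    # 3. Store the result as the new navigation matrix.
    vtkSelf.SetNavigationMatrix(updated)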
The ParaView source code repository contains an
example
of this approach in action.
There are two main coordinate systems supported in CAVEs, termed "Fixed"
and "Navigable". Any pipeline objects can be placed in either coordinate
system by toggling advanced properties and/or searching in the properties
panel for "coordinate system".
Figure 4.40 Choosing a coordinate system for representations.
Any pipeline objects placed in the Fixed coordinate system will not
move as a result of navigation, while objects placed in the Navigable
coordinate system will be transformed by the navigation matrix.
For example, let's say you wish to place two "screen bumpers" in your
scene to mark where your physical displays are located. This could help
you avoid accidentally walking into the screens while immersed in your data
inside your CAVE.
To accomplish this, you could create two "Box" sources, and use the screen
coordinates from your .pvx file to compute the Center and X/Y/ZLength
properties of these boxes so that they share the same length as the front
and side walls, but have small, fixed width and depth (so they don't
obscure too much of the data you want to see).
Once these are sized and positioned to your satisfaction, you would set
their coordinate system to Fixed. Later, when loading your data, you
don't have to do anything special, as the default coordinate system is
Navigable, which is likely what you want.
Now, when you call SetNavigationMatrix in your interactor style
proxies, the screen bumpers will remain fixed in place, adjacent to your
front and side walls, while you, for example, fly through the rest of
your data.
Two more important concepts in the CAVEInteraction plugin are so-called
connections and interactions. At a high level, connections represent
the events that might be generated by your devices, while interactions
(also sometimes referred to as "interactors" or "interactor style proxies")
represent the actions you take in response to those events. Let's start
by taking a closer look at connections.
You can add any number of VR Connections by clicking the Add button
below the list of connections in the VRConnections section of the panel.
Alternatively, if you select an item in that list, you can either Edit
or Remove that item.
To add a connection you must first choose whether the source is a VRPN or VRUI
event source (if you built ParaView yourself and don't see both of those
options, check your build configuration and make sure they're enabled). Then
you can assign a unique name and specify the network address of the event
source. Next, you must define one or more events you expect to receive from
the connection. To define an event, first select the event type from the
left-most combo box, then type in the event ID number (you get these from
your tracking system), and lastly provide a name to associate with the event.
Once you have filled in all three values, click the "plus" button to create
the event definition. See the image below for some examples.
To remove any event from the configuration, simply select the associated row
and click the "minus" button. Event definitions cannot be edited, instead you
must remove the event you wish to edit, and then add it again.
The screenshot above shows a single VRPN connection with 9 events defined
corresponding to all the inputs available on a standard HTC Vive controller.
The trigger is defined not only as a valuator (which indicates how much the
trigger is squeezed), but also as a button (which only fires once the trigger
has been squeezed to the maximum extent, where there is a small haptic response
to help indicate a "click"). Similarly, the trackpad has been associated with
two valuator events (one for the amount of displacement in the y direction, and
one for the x displacement), as well as a button click for when you press it
a little harder. The controller's orientation is defined as a "Tracker" event,
and events are defined for all the other buttons available on the device.
Once you have defined events via the "VR Connections" you can use them to
configure interactions. Interactions are typically defined in C++ code as
subclasses of the vtkSMVRInteractorStyleProxy class. Most interactor
styles are defined to operate on a single property of a selected proxy, and
the specific proxy and property are left to the user to choose at runtime.
Besides that, an interactor style has a few responsibilities:
Declare all event types it needs to operate, giving each event type a useful name (known as a "Role").
Declare the number of elements it can handle updating. For example a style could update a 3-element color or position property, or it could update a 16-element matrix property.
Implement one or more of the handler methods defined by the base class.
To declare the event types it needs, an interactor style makes role-declaration
calls in its constructor; see the built-in styles in the plugin's source code
for concrete examples.
The CAVEInteraction plugin defines a set of interactor styles that can be
used out of the box, each of which follows the guidelines above. These
built-in styles are available to select from the left-most combo box in
the Interactions section of the panel:
Once you have selected an interactor style from the list, you can then
select the proxy and property upon which it should act, using the remaining
two combo boxes to the right. If the interactor style you chose overrides
the GetControlledPropertySize() method, choosing a proxy from the
center combo will constrain the properties in the right-most combo so
that only the proxy's properties of that length are shown. Otherwise
all of the proxy's properties are shown in the right-most combo. Once
you have set all three the way you want, click the "Add" button. This
will bring up a dialog allowing you to define the mapping from your
pool of defined events to the "Roles" defined by the interactor style:
The image above shows what the dialog looks like when the interactor
style has only defined one named tracker role, "Tracker". As you can
see, the combo box was automatically filled with one of the events of
type "tracker" defined earlier in the VRConnections section of the
panel. Clicking "Ok" on the dialog accepts the chosen mapping of event
to named role.
The "Python" interactor style is a recent addition to the built-in
collection which gives you complete freedom and power to define your
own interactions in Python.
To use this type of interactor style, simply select "Python" in the
left-most combo box in the Interactions section of the panel.
When you do this, you can just ignore the other two drop-downs, as
your Python code will have the ability to update any number of proxies
and properties. Once you click the "Add" button and your "Python"
interactor style appears in the list, select it, and its "File Name"
property editor will appear in the UI, allowing you to select the
Python file to use:
Any time you change the file, and also any time you click the
"Refresh" button in the "File Name" property editor, the selected
file will be re-read, and the "Add VR Interaction" dialog will
reappear, allowing you to update the event/role bindings.
To define a custom Python interactor, you have a few responsibilities (a skeleton sketch follows this list):
In your Python file, define a method called create_interactor_style() that creates and returns an instance of your class
In your interactor style Python class, define an Initialize(self,vtkSelf) method where you declare the event types of interest
In your interactor style Python class, define your tracker, button, and valuator handler methods
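A skeleton satisfying these responsibilities might look like the following sketch. Only create_interactor_style() and Initialize(self, vtkSelf) are named in this guide; the handler-method names and role declarations below are placeholder assumptions, so consult the repository example for the actual API:

class MyInteractorStyle(object):
    def Initialize(self, vtkSelf):
        # Declare the event types (roles) this style needs here, using
        # the role-declaration methods on vtkSelf (see the repository
        # example for the exact calls).
        pass

    # Placeholder handler methods for button, tracker, and valuator
    # events; names and signatures are illustrative assumptions.
    def HandleButton(self, vtkSelf, role, event):
        pass

    def HandleTracker(self, vtkSelf, role, event):
        pass

    def HandleValuator(self, vtkSelf, role, event):
        pass

def create_interactor_style():
    # Required module-level entry point: return an instance of the class.
    return MyInteractorStyle()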
You can also define a constructor for your class, and any instance
methods or module-level methods you wish. Your handler methods are
then called with the latest event data so you can do whatever your
imagination desires; all of ParaView's proxies are at your fingertips
to use as you see fit.
The paraview source repository has an example custom Python
interactor style you can use as a model.
To achieve head tracking in your custom Python interactor, you only
need to update the EyeTransformMatrix available on the active render
view proxy (accessible via GetActiveView()). To perform any kind
of transformation on the entire scene, hereafter referred to as
"navigation", simply call the SetNavigationMatrix method which
is defined on the vtkSMVRInteractorStyleProxy class. In C++ interactor
styles you can access this method on yourself (via the this pointer),
while in Python interactor styles, you can call it on the vtkSelf
argument passed to all methods (see the example linked above). In Python
interactor styles, you are also free to target properties on any pipeline
source/filter proxies in your custom interactor style, as well as properties
on paraview representation or view proxies.
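As a rough sketch of the head-tracking update (head_pose is a placeholder for the 16 matrix values obtained from your head tracker event; the layout is assumed to match what EyeTransformMatrix expects):

def update_head_tracking(head_pose):
    # head_pose: flat list of 16 matrix values from the tracker event
    # (layout assumed).
    view = GetActiveView()
    # Updating this property on the active render view proxy moves
    # the eye/head transform used for rendering.
    view.EyeTransformMatrix = head_pose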
If you downloaded a binary version of ParaView from the website, then it
is already set to collaborate with other immersive ParaView users around
the world. Similar to functionality in the XRInterface plugin, the
CAVEInteraction plugin supports collaboration via a publicly available
collaboration server, using a simple protocol.
Note that when using the collaboration feature, users of both plugins
can collaborate with each other seamlessly, though the XRInterface
plugin currently supports more features than the CAVEInteraction
plugin.
In this context collaboration means joining a session with other ParaView
users, and getting real-time information about their head/hand positions,
as well as possibly information about where they have navigated in the
navigable world. Some things to keep in mind when collaborating with
other users:
Each user must load the same data in order to see the same thing
Pipeline state is not shared with collaborators
When collaborating with others, you will see an animated avatar
representing each connected user, and each avatar will be a unique
color (chosen at random from a small set of neutral colors) and
have a billboard with their name hovering above their head.
When collaboration is enabled, the CAVEInteraction plugin panel will
include a section dedicated to configuring your connection to the
collaboration server.
The first thing to note about the collaboration panel is the checkbox
that enables/disables collaboration. In the CAVEInteraction plugin,
collaboration is handled by the existing event loop, and as such, you
begin and end collaborating using the normal Start and Stop
buttons described above (assuming the Collaboration checkbox is
checked). This is different from the XRInterface plugin, where you
must take separate action to connect to the collaboration server when
you are already interacting with your data in XR.
In addition to enabling/disabling collaboration, the panel also allows
you to specify the hostname (or IP address) and port of the collaboration
server, the name of a session to join, and your own name (visible to
collaborators on a billboard over your avatar's head). Once this
information is entered and you click Start, you will be collaborating
with all users who connect to the same server/port and specify the same
session.
Another property you can configure in the panel is the default avatar up
vector. In ParaView the default camera is oriented with the positive Y
axis pointing up, the positive X axis pointing to your right, and the
positive Z axis pointing out of the screen at you. If your data is
oriented in such a way that this orientation makes sense, you can leave
the default avatar up vector alone. However, if your data is oriented
differently, the default avatar up vector allows you to configure the
up direction for new avatars joining your collaboration session.
The collaboration section of the CAVEInteraction plugin panel also
contains an area for output messages indicating users who have joined
or left the collaboration session.
If you don't see the collaboration panel and you built ParaView yourself,
check your build configuration to make sure it is enabled.
The last bit of configuration required for collaboration is to tell
ParaView how to communicate your personal orientation to collaborators.
Recall that you configured some number of trackers when you were using
"VR Connections" section of the panel. In order to collaborate with
others, you must configure which of those trackers should be associated
with your head and hands.
The figure above shows the avatar configuration dialog in a scenario
where we have configured a single connection named vrconn with
three separate trackers named puck, controller1, and
controller2. The event named vrconn.puck is the one being
used for head tracking, and as such, it has been associated with the
avatar head. The events named vrconn.controller1 and
vrconn.controller2 have been associated with the left and right
hand, respectively, since those are being used as wands with
buttons and valuators for triggering various interactions.
It is not required to have avatar hands configured; in that case,
your avatar will be displayed to collaborators with the unconfigured
hands missing. Configuring a head, however, is required.
You may notice the "Share Navigation" checkbox in the avatar configuration
dialog. If this box is left unchecked, collaborators will only see
the tracked positions of your head and hands within the confines of
your physical space. If you check the "Share Navigation" checkbox,
then collaborators will see not only your tracked head and hand
positions, but also see any navigation you perform via your interactors
as they use the SetNavigationMatrix method described above.
Kitware hosts a collaboration server instance for demonstration and
testing purposes; it is the one configured by default when you load
the CAVEInteraction plugin (located at vrserver.kitware.com).
However, the collaboration server is an open source project that is
hosted here.
You can build the server yourself, following the instructions in the
project's README.md, and then deploy it within your own network.
Additionally, the server is included in Linux binary downloads of the
ParaView application, available from Kitware's download
website.
The CAVEInteraction plugin supports saving its own state (file extension
.pvvr), as well as saving all its state in a ParaView state file
(extension .pvsm).
Whether you use the two buttons at the bottom of the CAVEInteraction panel
to save CAVEInteraction-specific state, or use ParaView's File menu to
save that state along with the rest of the ParaView
state, all aspects of the plugin state are saved. This includes all
information about connections, interactions (including the Python file,
in the case of a Python interactor style proxy), and collaboration.