How to show a selected polygon feature, using a single marker in its center


I've been asked to replicate a map where the selected polygon is shown with a red tick/check mark:

I need to tell ArcMap to use this symbol when a polygon is selected, so that un-selecting the polygon removes the check mark, and selecting a new polygon shows the check mark for that polygon. I don't want to edit a point layer and move the point around, which would be a manual workaround.

I tried setting the layer's selection symbol…

… but the only options here assume that the marker/image should be distributed multiple times throughout the selected polygon:

Is it possible to show a single marker within a selected polygon? I suspect this will require ArcObjects coding (which is probably out of the picture for this) but someone may know of a workaround.


As I don't know ArcObjects, I would write a Python add-in (a button tool on a toolbar) that executes the following summarized pseudocode (not tested, and it can certainly be improved). Not as straightforward as clicking a hidden magic check box in the Select options, but possible…

  1. when you click the add-in button (use the onClick function): create an in-memory layer of centroids of the polygon features, apply the desired symbology to it (your check mark symbol), and set its definition query so that no centroid is visible
  2. when a polygon is clicked (using the onMouseDownMap and onMouseUpMap functions):
    • select the intersecting polygon feature (check first whether any feature intersects the clicked location). The selection color must also be set to transparent before you run the add-in.
    • get the selected polygon's OID or any other relevant attribute and use it to change the definition query of the centroid layer, so that only the corresponding centroid is visible.
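As a rough illustration of step 2, the definition-query update itself is plain string handling. The sketch below assumes an OBJECTID field and an in-memory centroid layer; the ArcMap wiring is shown only in comments and is a hypothetical, untested assumption:

```python
# Illustration of the definition-query update in step 2.
# The query-building logic is plain Python; the ArcMap wiring
# (layer object, field name) is an assumption shown only in comments.

def centroid_defquery(oid, oid_field="OBJECTID"):
    """Definition query for the in-memory centroid layer.

    oid is the OID of the selected polygon, or None when nothing is
    selected (then the query matches no feature, hiding all centroids).
    """
    if oid is None:
        return "{} = -1".format(oid_field)
    return "{} = {}".format(oid_field, int(oid))

# In the add-in's onMouseUpMap handler, one would then do roughly:
#   centroid_layer.definitionQuery = centroid_defquery(selected_oid)
#   arcpy.RefreshActiveView()
```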

AVIATION WEATHER CENTER

This tutorial provides information on effectively using the Flight Path Tool. The following topics are covered:

How to Use the Configuration Manager

Each time the Flight Path Tool Application is started, the Configuration Manager allows a configuration file to be selected and loaded. The configuration file can save almost anything from the tool that is configurable, such as data layers, data overlays, favorite flight routes, vertical cross sections, and zoom levels. There is a Default file that comes with the application, and if no files have been previously saved, this will be the only option. Once configuration files have been saved, those will be listed, as well as the Default file and Other. The Other option is used to browse the local file system to load a configuration file from another location. Left click the arrow in the drop down box to select a configuration file, then click the Load button to load the application.

The Configuration Manager is also used to save and load configuration files. To load a different configuration file after the application has started, left click File->Load and choose a configuration file from the local file system.

How to Zoom

The zoom button is selected by default when the flight path tool is loaded. When a different button is used, the zoom button is automatically re-selected after that operation is completed. To zoom, click and hold the left mouse button and drag the mouse (creating a "rubber band") over the area of interest. Release the button and the map will redraw to the selected area.

The back button reverts to the previous zoom state. If the Continental U.S. is currently showing, and the zoom button is then used to zoom into Colorado, clicking the back button will re-draw the map over the Continental U.S. Note that the displayed data will be any currently selected Background Grids or Data Overlays.

The overview button changes the area displayed in the main window. Left click the overview button. In the Overview window, click and hold the left mouse button down within the red box and drag the box to the new area of interest. Release the mouse button and the map in the main window will redraw to the selected area. Zoom on an area in the main window (see above) and a smaller box will appear. Note, the red box surrounds the area selected by zooming, so if a zoom state has not been selected, the box will surround the entire map. The edges of the red box may also be dragged to resize the zoom area.

The View menu item is used to view other parts of the world in the main window. If Europe is selected, the World map overlay should also be selected in the Data Layers area. In the Data Layers dialog, scroll down to the Map Overlays item and check World in the list of checkboxes. Then, select the desired data type from the Data Overlays check boxes. Note that the background grid data sets are not available for places outside the Continental U.S.

The pan button is for moving to different areas of the currently selected view. To pan the map, click the pan button , then click and drag the map with the left mouse button to the new area of interest. Release the button and the map will redraw to the selected area.

How to Change the Altitude

The altitude slider is marked with text indicating altitude in feet. Click and hold the left mouse button within the orange box on the slider and drag the mouse until the desired altitude turns orange. Release the mouse button and the map will be redrawn with data at the requested altitude.

How to Choose the Data Valid Time

The time slider is used to change the data in the main window. The red line shows the current time, and the orange slider determines the selected data valid time. Click and hold the left mouse button within the orange box on the time slider and drag the mouse until it is on the desired time. Release the mouse button and the map will be redrawn with data at the requested time.

How to Choose the Time Range and Animate

The time range shown below the map limits the valid times which can be selected and defines the time range which will be animated. To change the time range or animation properties, left click on the Configure->Time menu at the top of the main window to open the Time & Animation Configuration window.

The Time & Animation Configuration window has two tabs. The first, Time, is used to control the visible time range. The second, Animation, is used to control the length and speed of animation. The Cancel button closes the dialog without making any further changes, while the Apply button commits all changes.

    Time Tab
    The Time tab allows configuration of the number of tick marks on the time slider and whether or not the sliding button snaps onto the nearest tick mark. It also controls whether the application is in Archive Mode or Real-time Mode.

In Real-time Mode (the default), the application automatically looks for new data on the interval specified in the Update every pulldown menu. In this mode, there are several options for the range and selected time in the time slider. By default, the selected time is always now, meaning that it moves forward as time progresses. New data are selected as they become available. Other options are to have the selected time stay Fixed at its current setting or to have the selected time always be Offset from the now time. The Date Range is the time range shown in the time selector and the time range over which animation will take place. By default this range moves forward as time progresses, always showing the same number of hours before now and after now. Selecting a Fixed start time instead allows the range to be fixed from a Fixed start time to a Max date range.

In Archive Mode, the application does not automatically look for new data or change the selected time. In this mode, the Start time and End time must be explicitly set.

By default, the Number of Frames is equal to the number of tick marks in the time range. Sliding the selector left or right increases or decreases, respectively, the number of frames. The Frame interval automatically updates to show the amount of time between frames. The Number of Frames can also be set by typing a number into the Frame interval field and pressing the Enter key.

The Delay between each frame and the delay between the end of the loop and its automatic restart can also be set. The Enter key must be pressed for changes to be registered.

How to Select Data Types

Data Layers and the valid times at which data sets are available are shown in the scrollable window below the time slider in the main window.

Data Layers are organized into groups. Background Grids are gridded, colored data sets derived from forecast models: temperature, relative humidity, wind speed, icing probability, and turbulence probability. Data Overlays are icons which can be drawn over grids: wind barbs, METAR observations, pilot reports, AIRMETs and SIGMETs, and TAFs. Map Overlays are static topographic grid and line features which give geographic context to the data: topography, state outlines, country and coast outlines, rivers, roads, and counties. Data Layer groups can be hidden or expanded by clicking the circle to the left of their name.

  • Data Layer Visibility
    Data Layers are made visible by clicking the checkbox to the left of their name. Only one Background Grids layer can be visible as a grid at any given time. Turning on a grid automatically turns off all other grids. Background Grids layers, however, can also be drawn as unfilled contours. See the section on How to Configure Data Layers for more information on displaying contours over a grid. When a Data Layer is visible, the information in the Available Data Sets area to its right is shown in color. When a Data Layer is not visible, this information is greyed out.
  • Available Data Sets
    The Available Data Sets area shows the available valid times for each Data Layer. Model-derived data is represented by a circle below the time at which the data set is valid. Continuously updated data, such as METARs and pilot reports, are represented by a single circle with "wings". The width of the wings indicates the temporal extent of the available data. For these data sets, only the most recent reports are available, regardless of the selected time. Static data such as topography and state outlines are represented by the text Static Data. For Model-derived data, the selected data set is the latest data set which is valid on or before the currently selected time. This data set is indicated by a larger circle than for the unselected available data sets. As shown above, some Data Layers may be showing data sets whose valid times are different than others.
  • Layer Order Configuration
    Data Layers are drawn one on top of another like sheets of acetate. Any information which a Data Layer draws can obscure information from a Data Layer below it. Generally, gridded data layers are drawn first, followed by contours and wind barbs, topography, data overlays such as METARs, and finally, map overlays such as state boundaries. When a gridded data set is enabled as a contour, its Data Layer is automatically moved up so that the contours are drawn over any other gridded data set. In some cases, it may be required to manually change the Data Layer drawing order. Left click the Configure->Layer Order menu item in either the Flight Path Tool or Vertical Cross Sections window to change the order that data layers are drawn. The Data Layer Order Manager dialog box will appear. Left click and drag visible data layers up or down in the list to change the order the layers are drawn. The names of visible layers are drawn in black font, while the names of invisible layers are greyed out, and layers can be made Visible or Not Visible by selecting the appropriate button. The inner buttons move the selected layer one step, while the outer buttons move the layer to the top or bottom.

How to Configure Data Layers

All Data Layers can be configured to change their appearance, but the attributes which can be configured may be different depending on the Data Layer. To access a Data Layer's configuration dialog, left click the Configure menu at the top of the application, then select the Data Layer group, and finally, the Data Layer name.

Data Layer configuration dialogs all have Apply, OK, and Cancel buttons. Apply commits any changes to the Data Layer configuration and forces it to redraw, but does not close the dialog. OK commits any changes to the Data Layer configuration, forces it to redraw, and closes the dialog. Cancel ignores any changes to the Data Layer configuration and closes the dialog. All of the configurations described below require Apply or OK to take effect.

  • Common Configuration
    All Data Layers may be Enabled or Disabled. This is the same functionality as using the checkbox in the Data Layers area at the bottom of the main window. All Data Layers also have transparency. Sliding the marker between More and Less alters the Data Layer's transparency from clear to opaque, respectively.
  • Gridded Data Layers
    Gridded Data Layers can be drawn as grids and/or contours. When the Show data as Grid checkbox is selected, the data are drawn as color-filled polygons. When the Show data as Contours checkbox is selected, data are drawn as contours. Contour Intervals, Colors, and Labels may all be configured. In order to show contours of one gridded Data Layer over the grid of another gridded Data Layer, the Fill Contours option must be off.
  • Wind Barbs Data Layer
    The wind barbs Data Layer allows its Wind Barb Density to be configured. Sliding the marker between Fewer and More changes the wind barb density from very sparse to extremely dense, respectively. The midpoint results in barbs which are spaced approximately one barb apart. Because wind barbs are derived from the gridded windspeed field, there are also controls for drawing the wind speed as a grid or contours. These controls are identical to enabling the Wind Speed Data Layer in the Background Grids group.
  • METARs Data Layer
    The METARs Data Layer allows the observed variables to be independently selected, as well as adjustment of the METAR density.

When no METAR variables are turned on, which is the default, all METAR stations display an icon. The icon is colored according to the flight category and shaped according to the cloud coverage, as documented at:

METAR variables may be toggled on or off by clicking in the checkbox next to their name, or clicking their title in the key image. The variables for each station are drawn around the station location as shown below. Tables explaining the weather symbols can be found at:

  • All means all stations will be shown, and they will most likely overlap.
  • More means most stations will be shown, and some may overlap.
  • Default means the higher priority stations in the view will be shown, and none will overlap.
  • Fewer means only the highest priority stations will be shown, with space between them.

The number of PIREPs shown may be limited by using the altitude sliders. By default, PIREPs at all altitudes are shown. To limit the display to PIREPs between certain altitudes, drag the yellow sliders at the top and bottom of the altitude slider to define a new altitude range. The entire range may also be moved by dragging its dark grey center.

How to Create a Flight Path

  • Creating a Flight Path
    1. Left click the airplane button to begin a flight path.
    2. Using the mouse, place the cursor, which is now an airplane, on/near the desired departure airport and click the left mouse button to begin a new path.
    3. Left click once on/near all en-route airport(s) to continue the route. A line will appear connecting the points where you have clicked.
    4. Upon reaching the final destination, double-click on/near the destination airport. The flight path can also be finished by clicking on the "airplane button" once the path points have been clicked. Either of these actions completes the path and the program requests the data from the server and creates a new window called Vertical Cross Sections . This process may take a few seconds.

  • Navigating the Vertical Cross Sections Window
    The window that "pops up" after a flight route has been created is called Vertical Cross Sections. Once a path has been created, it is stored in the tool's history. All paths created in a session or saved in a configuration file can be accessed by selecting the View->History menu item in the Vertical Cross Sections window. There is also a list of "classic" paths that can be selected from the View->Classics menu item.
  • Exporting and Printing
    The File->Export Image menu item can be used to save either the image in the main Flight Path Tool window, or a Vertical Cross Section as a png file on your local hard drive. The images can be printed by selecting File->Print Image and following the instructions in the subsequent dialog box.
  • Saving a Path in a Configuration File
    Save flight paths and their Vertical Cross Sections by saving a configuration file. Left click on File->Save as in the Flight Path Tool window and the Configuration Manager dialog box will appear. Left click on the arrow in the drop down box and select Other to open a file browser and specify the name and location on the local hard disk for the new configuration file. When the saved configuration file is loaded during subsequent sessions, the data will be current, with the saved routes available via the View->History menu in the Vertical Cross Sections window.



HLU Basics

Plot creation can get very complex depending on how you want to look at your data. NCL's interface for creating graphics is called the High Level Utilities (HLUs). There are five basic steps needed to create a simple plot. Issues such as combining plots, multiple plot frames, adding maps, and annotations are covered later.

    Open a workstation
      This is probably the simplest decision to make. X11 output will create an X11 window on the display specified by the DISPLAY environment variable. NCGM is a compact, efficient, and portable vector description of the graphical output. NCGM can be translated and viewed using the application ctrans and animated using the idt application. NCGM is recommended for the efficient storing of NCAR Graphics-generated output. PostScript can be output directly. The PostScript generated by the HLUs is encapsulated so it can be directly imported into documents or PostScript-compatible software and hardware devices. The workstation is responsible for managing all the colors used by plots. Each workstation has "wkColorMap" resources that can be set to an n×3 array of red, green, and blue colors. In addition, several default color maps can be set by using their names instead of an array of color combinations.

    The following are the Class names needed to create each of the output types using the create statement:

    • xWorkstationClass - X11 output
    • ncgmWorkstationClass - NCGM output
    • psWorkstationClass - PostScript output

      There are HLU objects whose primary purpose is to describe the data in a flexible fashion. The HLU plot object needs to know what the dimensionality of the data is, what the coordinates of the data are, whether the data contains missing values, and finally what data type the data is. If it weren't for these data objects, you'd have to convert your data into a common form before visualizing it. The data objects let you supply your data in whatever form it already takes. There are three basic types of data objects. There is the scalarFieldClass, which describes scalar fields and is used by the contourPlotClass. There is the coordArraysClass, which is used by the xyPlotClass, and finally there is the vectorFieldClass, which is used by the vectorPlotClass and the streamlinePlotClass. A data object must be defined before the next step.

      Producing plots using the HLUs is a different paradigm than most people are used to. In NCL, you create an instance of a plot which can then be drawn and/or manipulated, rather than setting up a single graphics state or calling a single procedural interface with a lot of parameters. HLU plots exist and can be drawn independently from each other. This provides a great deal of flexibility, but can be a little confusing for beginners. Just as you create workstations and data objects, you can create plot objects. There are many types of plots, which are listed in the HLU class document.
      The draw procedure is called to draw a plot. Once a plot is created, draw can be called at any time, a feature unique to NCL.
      Once a plot has been drawn to an output frame and no other plots are going to be drawn on the same frame, a call to the frame procedure is needed. This call, when using either PostScript or CGM workstations, causes a new page to be inserted into the output file. If X11 output is being used, a frame call will wait for a button click (unless "wkPause" is set to False) and then clear the screen.

    The following short script demonstrates these steps by creating a simple filled contour plot from a data file included with the NCAR Graphics release. The available plot types are listed in the HLU class document. The script should run without modification on your system if the NCAR Graphics data directory was installed.

    Setting the size of a plot


    A common use case for R users is having a data.frame of

    • addresses for which you require coordinates (google_geocode())
    • coordinates for which you want the address (google_reverse_geocode())
    • coordinate pairs for which you want the directions between them (google_directions())

    In these cases Google’s API can only accept one request at a time. Therefore it’s not possible to ‘vectorise’ these functions as they have to operate one row at a time.

    The solution, therefore, will be to write some sort of loop to iterate over each row of the data.frame.

    An example (taken from user @Jazzurro 's answer on StackOverflow) is where you have 3 pairs of coordinates, and you want to find the route (polyline) between each pair.

    In this example they used an lapply to iterate over the rows, but any looping mechanism would have worked as well.
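The same one-row-at-a-time pattern can be sketched in Python for illustration. Here fetch_route is a hypothetical stand-in for a real single-request directions call, not a googleway function:

```python
# One-request-at-a-time iteration over coordinate pairs, mirroring
# the lapply approach above. fetch_route is a hypothetical stand-in
# for a real directions API call (one origin/destination per call).

def fetch_route(origin, destination):
    # a real implementation would issue a single HTTP request here
    return {"origin": origin, "destination": destination, "polyline": "..."}

def routes_for_pairs(pairs):
    # one call per row, results collected in row order
    return [fetch_route(o, d) for o, d in pairs]

pairs = [((33.7, -84.4), (36.1, -86.8)),
         ((36.1, -86.8), (35.1, -90.0)),
         ((35.1, -90.0), (29.9, -90.1))]
results = routes_for_pairs(pairs)
```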


    1.4 Generating the Polygons Within a Block of Terrain

    There are many ways to break up the task of generating a block of terrain on the GPU. In the simplest approach, we generate density values throughout a 3D texture (representing the corners of all the voxels in the block) in one render pass. We then run a second render pass, where we visit every voxel in the density volume and use the GS to generate (and stream out to a vertex buffer) anywhere from 0 to 15 vertices in each voxel. The vertices will be interpreted as a triangle list, so every 3 vertices make a triangle.

    For now, let's focus on what we need to do to generate just one of the vertices. There are several pieces of data we'd like to know and store for each vertex:

    • The world-space coordinate
    • The world-space normal vector (used for lighting)
    • An "ambient occlusion" lighting value

    These data can be easily represented by the seven floats in the following layout. Note that the ambient occlusion lighting value is packed into the .w channel of the first float4.

    The normal can be computed easily, by taking the gradient of the density function (the partial derivative, or independent rate of change, in the x, y, and z directions) and then normalizing the resulting vector. This is easily accomplished by sampling the density volume six times. To determine the rate of change in x, we sample the density volume at the next texel in the +x direction, then again at the next texel in the -x direction, and take the difference; this is the rate of change in x. We repeat this calculation in the y and z directions, for a total of six samples. The three results are put together in a float3, and then normalized, producing a very high quality surface normal that can later be used for lighting. Listing 1-1 shows the shader code.

    Example 1-1. Computing the Normal via a Gradient
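The listing itself is HLSL shader code; as a rough equivalent, here is the same six-sample central-difference estimate in Python. The sign convention (density positive inside the terrain, so the outward normal is the negated gradient) is an assumption about the field:

```python
import math

def normal_from_gradient(density, p, texel=1.0):
    """Approximate the surface normal at point p by central differences
    of the density function, then normalize (six samples total).
    Assumes density > 0 inside the terrain, so the outward normal is
    the negated gradient."""
    x, y, z = p
    gx = density((x + texel, y, z)) - density((x - texel, y, z))
    gy = density((x, y + texel, z)) - density((x, y - texel, z))
    gz = density((x, y, z + texel)) - density((x, y, z - texel))
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    return (-gx / length, -gy / length, -gz / length)

# A flat "floor" at z = 0: density positive below, negative above.
floor = lambda p: -p[2]
n = normal_from_gradient(floor, (0.0, 0.0, 1.0))  # points straight up
```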

    The ambient occlusion lighting value represents how much light, in general, is likely to reach the vertex, based on the surrounding geometry. This value is responsible for darkening the vertices that lie deep within nooks, crannies, and trenches, where less light would be able to penetrate. Conceptually, we could generate this value by first placing a large, uniform sphere of ambient light that shines on the vertex. Then we trace rays inward to see what fraction of the rays could actually reach the vertex without colliding with other parts of the terrain, or we could think of it as casting many rays out from the vertex and tracking the fraction of rays that can get to a certain distance without penetrating the terrain. The latter variant is the method our terrain demo uses.

    To compute an ambient occlusion value for a point in space, we cast out 32 rays. A constant Poisson distribution of points on the surface of a sphere works well for this. We store these points in a constant buffer. We can—and should—reuse the same set of rays over and over for each vertex for which we want ambient occlusion. (Note: You can use our Poisson distribution instead of generating your own: search for "g_ray_dirs_32" in models\tables.nma on the book's DVD.) For each of the rays cast, we take 16 samples of the density value along the ray—again, just by sampling the density volume. If any of those samples yields a positive value, the ray has hit the terrain and we consider the ray fully blocked. Once all 32 rays are cast, the fraction of them that were blocked—usually from 0.5 to 1—becomes the ambient occlusion value. (Few vertices have ambient occlusion values less than 0.5, because most rays traveling in the hemisphere toward the terrain will quickly be occluded.)

    Later, when the rock is drawn, the lighting will be computed as usual, but the final light amount (diffuse and specular) will be modulated based on this value before we apply it to the surface color. We recommend multiplying the light by saturate(2 – 2*ambient_occlusion), which translates an occlusion value of 0.5 into a light multiplier of 1, and an occlusion value of 1 into a light multiplier of 0. The multiplier can also be run through a pow() function to artistically influence the falloff rate.
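The mapping from occlusion to light multiplier can be sketched as a clamped linear ramp. The exact formula used by the demo may differ; this version simply reproduces the stated endpoints (occlusion 0.5 → full light, occlusion 1 → darkness):

```python
def saturate(x):
    # clamp to [0, 1], as in HLSL's saturate()
    return max(0.0, min(1.0, x))

def light_multiplier(ambient_occlusion):
    # clamped linear ramp: occlusion 0.5 -> 1.0, occlusion 1.0 -> 0.0
    return saturate(2.0 - 2.0 * ambient_occlusion)
```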

    1.4.1 Margin Data

    You might notice, at this point, that some of the occlusion-testing rays go outside the current block of known density values, yielding bad information. This scenario would create lighting artifacts where two blocks meet. However, this is easily solved by enlarging our density volume slightly and using the extra space to generate density values a bit beyond the borders of our block. The block might be divided into 32³ voxels for tessellation, but we might generate density values for, say, a 44³ density volume, where the extra "margin" voxels represent density values that are actually physically outside of our 32³ block. Now we can cast occlusion rays a little farther through our density volume and get more-accurate results. The results still might not be perfect, but in practice, this ratio (32 voxels, versus 6 voxels of margin data at each edge) produces nice results without noticeable lighting artifacts. Keep in mind that these dimensions represent the number of voxels in a block; the density volume (which corresponds to the voxel corners) will contain one more element in each dimension.

    Unfortunately, casting such short rays fails to respect large, low-frequency terrain features, such as the darkening that should happen inside large gorges or holes. To account for these low-frequency features, we also take a few samples of the real density function along each ray, but at a longer range—intentionally outside the current block. Sampling the real density function is much more computationally expensive, but fortunately, we need to perform sampling only about four times for each ray to get good results. To lighten some of the processing load, we can also use a "lightweight" version of the density function. This version ignores the higher-frequency octaves of noise because they don't matter so much across large ranges. In practice, with eight octaves of noise, it's safe to ignore the three highest-frequency octaves.

    The block of pseudocode shown in Listing 1-2 illustrates how to generate ambient occlusion for a vertex.

    Note the use of saturate(d * 9999), which lets any positive sample, even a tiny one, completely block the ray. However, values deep within the terrain tend to have progressively higher density values, and values farther from the terrain surface do tend to progressively become more negative. Although the density function is not strictly a signed distance function, it often resembles one, and we take advantage of that here.

    Example 1-2. Pseudocode for Generating Ambient Occlusion for a Vertex
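A minimal Python sketch of the ray-casting loop, assuming the saturate(d * 9999) blocking rule described above. The six axis-aligned ray directions here stand in for the 32-ray Poisson set, and the step size is an arbitrary illustrative choice:

```python
def saturate(x):
    # clamp to [0, 1]
    return max(0.0, min(1.0, x))

def ambient_occlusion(pos, density, ray_dirs,
                      samples_per_ray=16, step=0.05, multiplier=9999.0):
    """Fraction of the cast rays blocked by the terrain around pos.
    density(p) > 0 means inside the terrain. Ray set, step size, and
    multiplier are illustrative assumptions."""
    blocked = 0.0
    for dx, dy, dz in ray_dirs:
        visibility = 1.0
        for i in range(1, samples_per_ray + 1):
            p = (pos[0] + dx * step * i,
                 pos[1] + dy * step * i,
                 pos[2] + dz * step * i)
            # with a huge multiplier, any positive sample fully blocks the ray
            visibility *= 1.0 - saturate(density(p) * multiplier)
        blocked += 1.0 - visibility
    return blocked / len(ray_dirs)

# six axis-aligned rays as a stand-in for the Poisson sphere directions
axis_rays = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
             (0, -1, 0), (0, 0, 1), (0, 0, -1)]
```

Lowering the multiplier (for example, to 8 for short-range samples) makes the blocking "fuzzy" instead of binary, as discussed below.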

    During ray casting, instead of strictly interpreting each sample as black or white (hit or miss), we allow things to get "fuzzy." A partial occlusion happens when the sample is near the surface (or not too deep into the surface). In the demo on the book's DVD, we use a multiplier of 8 (rather than 9999) for short-range samples, and we use 0.5 for long-range samples. (Note that these values are relative to the range of values output by your particular density function.) These lower multipliers are especially beneficial for the long-range samples; it becomes difficult to tell that there are only four samples being taken. Figures 1-18 through 1-20 show some examples.

    Figure 1-18 Long-Range Ambient Occlusion Only

    Figure 1-19 Both Long-Range and Short-Range Ambient Occlusion

    Figure 1-20 The Regular Scene, Shaded Using Ambient Occlusion

    1.4.2 Generating a Block: Method 1

    This section outlines three methods for building a block. As we progress from method 1 to method 3, the techniques get successively more complex, but faster.

    The first (and simplest) method for building a block of terrain is the most straightforward, requiring only two render passes, as shown in Table 1-1.

    Table 1-1. Method 1 for Generating a Block

    Geometry Shader Output Struct

    1. Fill density volume with density values.
    2. Visit each (nonmargin) voxel in the density volume. The geometry shader generates and streams out up to 15 vertices (5 triangles) per voxel. Vertex count per voxel: 0/3/6/9/12/15.

    However, this method is easily optimized. First, the execution speed of a geometry shader tends to decrease as the maximum size of its output (per input primitive) increases. Here, our maximum output is 15 vertices, each consisting of 7 floats—for a whopping 105 floats. If we could reduce the floats to 32 or less—or even 16 or less—the GS would run a lot faster.

    Another factor to consider is that a GS is not as fast as a VS because of the geometry shader's increased flexibility and stream-out capability. Moving most of the vertex generation work, especially the ambient occlusion ray casting, into a vertex shader would be worthwhile. Fortunately, we can accomplish this, and reduce our GS output size, by introducing an extra render pass.

    1.4.3 Generating a Block: Method 2

    The problems described in method 1—extremely large geometry shader output (per input primitive) and the need to migrate work from the geometry shaders to the vertex shaders—are resolved by this design, shown in Table 1-2, which is an impressive 22 times faster than method 1.

    Table 1-2. Method 2 for Generating a Block

    Geometry Shader Output Struct

    1. Fill density volume with density values.
    2. Visit each voxel in the density volume; stream out a lightweight marker point for each triangle to be generated. Use a stream-out query to skip the remaining passes if there is no output here.
    3. March through the triangle list, using the vertex shader to do most of the work for generating the vertex. The geometry shader is a pass-through, merely streaming the result out to a buffer.

    Here, the gen_vertices pass has been broken into list_triangles and gen_vertices. The list_triangles pass has a much smaller maximum output: it outputs, at most, five marker points. Each point represents a triangle that will be fleshed out later, but for now it's only a single uint in size (an unsigned integer, the same size as a float). Our maximum output size has gone from 105 floats to 5, so the geometry shader will execute much faster now.

    The crucial data for generating each triangle is packed into the uint:

    Six integer values are packed into this one uint, which tells us everything we need to build a triangle within this voxel. The x, y, and z bit fields (6 bits each, holding voxel coordinates in [0..31]) indicate which voxel, within the current block, should contain the generated triangle. And the three edge fields (4 bits each) indicate the edge [0..11] along which each vertex should be placed. This information, plus access to the density volume, is all the vertex shader in the last pass needs to generate the three vertices that make up the triangle. In that final pass, all three vertices are generated in a single execution of the vertex shader and then passed to the geometry shader together, in one large structure.
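The exact bit layout isn't given here, so the following is only an illustrative sketch of the packing in Python (the real implementation does this in HLSL, and the field order and shift amounts are assumptions):

```python
def pack_triangle(x, y, z, edge0, edge1, edge2):
    """Pack one triangle's data into a single 32-bit uint.

    x, y, z : voxel coordinates within the block, [0..31] (6 bits each)
    edge0-2 : edge index [0..11] for each of the triangle's vertices
              (4 bits each)

    Field order and shift amounts are illustrative assumptions.
    """
    assert all(0 <= v < 64 for v in (x, y, z))
    assert all(0 <= e < 12 for e in (edge0, edge1, edge2))
    return (x | (y << 6) | (z << 12)
            | (edge0 << 18) | (edge1 << 22) | (edge2 << 26))

def unpack_triangle(packed):
    """Recover the six fields; conceptually, this is what the final
    vertex-generation pass does before sampling the density volume."""
    return (packed & 63, (packed >> 6) & 63, (packed >> 12) & 63,
            (packed >> 18) & 15, (packed >> 22) & 15, (packed >> 26) & 15)
```

Six fields at 6+6+6+4+4+4 = 30 bits fit comfortably in one 32-bit uint.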

    The GS then writes out three separate vertices from this one big structure. This activity produces a triangle list identical to what method 1 produced, but much more quickly.

    Adding another render pass is helpful because it lets us skip the final (and most expensive) pass if we find that there are no triangles in the block. The test to determine if any triangles were generated merely involves surrounding the list_triangles pass with a stream output query (ID3D10Query with D3D10_QUERY_SO_STATISTICS), which returns the number of primitives streamed out. This is another reason why we see such a huge speed boost between methods 1 and 2.

    Method 2 is faster and introduces the useful concept of adding a new render pass to migrate heavy GS work into the VS. However, method 2 has one major flaw: it generates each final vertex once for each triangle that uses it. A vertex is usually shared by an average of about five triangles, so we're doing five times more work than we need to.

    1.4.4 Generating a Block: Method 3

    This method generates each vertex once, rather than an average of five times, as in the previous methods. Despite having more render passes, method 3 is still about 80 percent faster than method 2. Method 3, instead of producing a simple, nonindexed triangle list in the form of many vertices (many of them redundant), creates a vertex pool and a separate index list. The indices in the list are now references to the vertices in the vertex pool, and every three indices denote a triangle.
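As a CPU-side illustration of the idea (not the GPU implementation), collapsing a nonindexed triangle list into a vertex pool plus an index list can be sketched like this:

```python
def build_indexed_mesh(raw_vertices):
    """Collapse a nonindexed vertex stream (3 vertices per triangle,
    many of them redundant) into a pool of unique vertices and an
    index list. Every three consecutive indices denote one triangle."""
    pool = []        # unique vertices
    index_of = {}    # vertex -> position in pool
    indices = []
    for v in raw_vertices:
        if v not in index_of:
            index_of[v] = len(pool)
            pool.append(v)
        indices.append(index_of[v])
    return pool, indices
```

Two triangles sharing an edge, for example, shrink from six stored vertices to four vertices plus six small indices.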

    To produce each vertex only once, we limit vertex production within a cell to only edges 3, 0, and 8. A vertex on any other edge will be produced by another cell—the one in which the vertex, conveniently, does fall on edge 3, 0, or 8. This successfully produces all the needed vertices, just once.


    Many intelligent systems, such as assistive robots, augmented reality trainers or unmanned vehicles, need to know their physical location in the environment in order to fulfill their task. While relying exclusively on natural landmarks for that task is the preferred option, their use is somewhat limited because the proposed methods are complex, require high computational power, and are not reliable in all environments. On the other hand, artificial landmarks can be placed in order to alleviate these problems. In particular, square fiducial markers are one of the most popular tools for camera pose estimation due to their high performance and precision. However, the state-of-the-art methods still perform poorly under difficult image conditions, such as camera defocus, motion blur, small scale or non-uniform lighting.

    This paper proposes a method to robustly detect this type of landmark under the challenging image conditions present in realistic scenarios. First, we re-define the marker identification problem as a classification one based on state-of-the-art machine learning techniques. Second, we propose a procedure to create a training dataset of synthetically generated images affected by several challenging transformations. Third, we show that, in this problem, a classifier can be trained using exclusively synthetic data, while still performing well in real and challenging conditions. Different types of classifiers have been tested to prove the validity of our proposal (namely, Multilayer Perceptron (MLP), Convolutional Neural Network (CNN) and Support Vector Machine (SVM)), and statistical analyses have been performed in order to determine the best approach for our problem. Finally, the obtained classifiers have been compared to the ArUco and AprilTags fiducial marker systems in challenging video sequences. The results obtained show that the proposed method performs significantly better than previous approaches, making the use of this technology more reliable in a wider range of realistic scenarios such as outdoor scenes or fast-moving cameras.
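As a toy illustration of the core idea (training a classifier purely on synthetically degraded marker images), here is a minimal sketch using scikit-learn. The marker rendering, the degradations, and the SVM settings are all simplified assumptions, not the paper's actual pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def render_marker(marker_id, size=32):
    """Render a toy 4x4 binary pattern derived from the marker id."""
    bits = (marker_id >> np.arange(16)) & 1
    cell = size // 4
    return np.kron(bits.reshape(4, 4).astype(float),
                   np.ones((cell, cell)))

def synth_sample(marker_id):
    """Degrade the rendering: random brightness/contrast plus noise,
    crude stand-ins for the paper's blur, scale and lighting changes."""
    img = render_marker(marker_id)
    img = img * rng.uniform(0.5, 1.0) + rng.uniform(0.0, 0.3)
    img = img + rng.normal(0.0, 0.1, img.shape)
    return img.ravel()

# Train on synthetic data only, as the paper proposes.
marker_ids = range(10)
X = np.array([synth_sample(i) for i in marker_ids for _ in range(50)])
y = np.array([i for i in marker_ids for _ in range(50)])
clf = SVC(kernel="rbf").fit(X, y)
```

The interesting property is that a classifier trained this way can then be evaluated on real, degraded images without ever having seen one.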


    Short- versus long-distance travel

    Much of the early game in No Man’s Sky revolves around gathering enough fuel to move from one star system to the next. In fact, the updated in-game tutorials and single-player campaign do an excellent job of explaining what resources you need to find and how to use them.

    As you move from system to system, you’ll find that each one has its own space station. Inside every space station is a teleporter that will connect you with every other space station that you’ve visited. Traveling between teleporters is completely free, and your starship will be docked and waiting for you when you arrive. You can even build a Base Teleport Module near your own base to link it into the network as well.

    Portals are the key to fast travel in the No Man’s Sky galaxy. Hello Games via Polygon

    But the space stations themselves, along with any bases that you’ve connected, form a small, personal network only you can use. It’s easy to go back and retrace your steps, but these teleporters aren’t going to get you anywhere new.

    To go further, you’ll first need to find a portal.


    Grayscale Conversion and Canny Edge Detection

    We don’t actually care about the colors of the picture at all, just the differences between their intensity values. To make the edge detection process simpler, we can convert the image to grayscale. This removes the color information and replaces it with a single intensity value for each pixel of the image. We can then use Canny edge detection to find the areas of the image where the intensity changes rapidly.

    For those of us who aren’t interested in writing this algorithm ourselves, OpenCV ships with a ready-to-use, single-call implementation. Let’s try running the Canny edge detection algorithm on our cropped image with some reasonable starter thresholds.

    We did it! The image now contains only the pixels that are indicative of an edge. But there’s a problem… we accidentally detected the edges of our cropped region of interest!

    Not to worry: we can fix this problem by simply placing the region-of-interest crop after the Canny step in our pipeline. We also need to adjust the region-of-interest utility function to account for the fact that our image is now grayscale:

    Now let’s run our pipeline once more.


    Installing 3D Slicer¶

    The “Preview Release” of 3D Slicer is updated daily (the process starts at 11pm ET and takes a few hours to complete) and represents the latest development, including new features and fixes.

    The “Stable Release” is usually updated a few times a year and is more rigorously tested.

    Slicer is generally simple to install on all platforms. It is possible to install multiple versions of the application on the same user account, and they will not interfere with each other. If you run into mysterious problems with your installation, you can try deleting the application settings files.

    Only 64-bit Slicer installers are available to download. Developers can attempt to build 32-bit versions on their own if they need to run Slicer on a 32-bit operating system. That said, this should be carefully considered as many clinical research tasks, such as processing of large CT or MR volumetric datasets, require more memory than can be accommodated with a 32-bit program.

    Once downloaded, follow the instructions below to complete installation:

    Windows¶

    Current limitation: Installation path must only contain English (ASCII printable) characters because otherwise some Python packages may not load correctly (see this issue for more details).

    Run Slicer from the Windows start menu.

    Use “Apps & features” in Windows settings to remove the application.

    macOS¶

    Open the install package (.dmg file).

    Drag the Slicer application (Slicer.app) to your Applications folder (or other location of your choice).

    This step is necessary because the content of a .dmg file is mounted as a read-only volume, and you cannot install extensions or Python packages into a read-only volume.

    Delete the Slicer.app folder to uninstall.

    Note for installing a Preview Release: Currently, preview release packages are not signed. Therefore, when the application is started for the first time, the following message is displayed: “Slicer… can’t be opened because it is from an unidentified developer”. To resolve this error, locate the application in Finder, right-click (two-finger click) it, and click Open. When it says the app can’t be opened, go ahead and hit Cancel. Right-click again and choose Open (yes, you need to repeat the same steps as before; the outcome will be different this time). Click the Open (or Open anyway) button to start the application. See more explanation and alternative techniques here.

    Installing using Homebrew¶

    Slicer can be installed with a single terminal command using the Homebrew package manager:

    This procedure avoids the typical google-download-mount-drag process to install macOS applications.

    Preview releases can be installed using homebrew-cask-versions:

    Linux¶

    Open the tar.gz archive and copy directory to the location of your choice.

    Installation of additional packages may be necessary depending on the Linux distribution and version, as described in subsections below.

    Run the Slicer executable.

    Remove the directory to uninstall.

    Note: Slicer is expected to work on the vast majority of desktop and server Linux distributions. The system is required to provide at least GLIBC 2.17 and GLIBCXX 3.4.19. For more details, read here.

    Debian / Ubuntu¶

    The following packages may be needed on a fresh Debian or Ubuntu installation:

    To run Slicer-4.11-2020-09-30 on older Debian releases (e.g., Debian 9) you may also need:

    ArchLinux¶

    ArchLinux runs the strip utility by default; this needs to be disabled in order to run Slicer binaries. For more information, see this thread on the Slicer Forum.


    3rd-party vs. OEM

    You have your eye on a super-cheap 3rd-party speedlight, amiright? While it might make sense, just understand what you're giving up by going with the lower price tag. Build quality, copy consistency, and component quality are likely to be more variable than with OEM. Support, warranty coverage, and resale value are likely to be much weaker. And future/backwards compatibility is likely to be lower, as are TTL accuracy and consistency and AF-assist performance.

    Most 3rd-party manufacturers reverse-engineer the hotshoe communication protocol, and as a result, while the flash may work very well with a current camera model, it may not work as well with a future or older model or, say, a film body using what is ostensibly the same flash protocol. To ease this issue, some 3rd-party flashes support firmware upgrades, but most of the super-cheap manual flashes (YN-660, Godox TT600, Amazon Basics, etc.) do not.

    Also keep in mind that there's a lot of rebranding going on at the cheap end of the market. Neewer, for example, doesn't manufacture flashes; they simply rebrand models from Meike, Yongnuo, Godox, Triopo, Voking, etc. It can sometimes be hard to tell what you've got. And the AmazonBasics flash and the Neewer TT560? I suspect they're just two of the many rebrandings of the Godox TT560.