Supported with:
- Engine
- ArcView
- ArcEditor
- ArcInfo
- Server
Additional library information: Contents, Object Model Diagram
The Geometry library provides vector representations for points, multipoints, polylines, polygons, and MultiPatches. Geometries are used by the geodatabase and graphic element systems to define the shapes of features and graphics. They supply operations that are used by the Editor and map symbology systems to define and symbolize features. Spatial references describe where these geometries are located on the earth. They also define the resolution and valid values for the coordinates used by the geometries. Almost every system in ArcObjects uses geometries and spatial references in some way.
In addition, you can buffer sets of geometries robustly and efficiently using the BufferConstruction object. Finally, you can access common and useful geometric operations, such as projection and buffering, from any execution environment over the internet, using the GeometryServer web service, or over a LAN using the Web ADF and the GeometryServer proxy object.
To use geometries accurately, consistently, and predictably, you need to understand how geometries and spatial references work together.
See the following sections for more information about this namespace:
- Introduction to geometry objects
- Introduction to spatial references and coordinate grids
- Using geometry objects
- GeometryServer
- BufferConstruction object
- Using spatial reference objects
- Using transformation objects
Introduction to geometry objects
In addition to the top-level geometry objects (Points, Multipoints, Polylines, Polygons, and MultiPatches), Paths, Rings, Segments, TriangleStrips, TriangleFans, and Triangles serve as building blocks for polylines, polygons, and MultiPatches. Polylines contain paths, polygons contain rings, and MultiPatches contain TriangleStrips, TriangleFans, triangles, and rings. Paths and rings are sequences of vertices connected by segments. A segment is a parametric function that defines the shape of the curve connecting its vertices. Segment types include CircularArc, Line, EllipticArc, and BezierCurve. In addition to the X and Y coordinates for each vertex in a geometry, these vertex attributes can be defined: M (measure), Z (elevation), and ID (foreign key). Envelopes describe the spatial extent of other geometries, and GeometryBags provide operations on collections of geometries.
Geometry objects are not meant to be extended by developers.
Multipoint, polyline, polygon, and MultiPatch geometries have constraints on their shapes. For example, a polygon must have its interior clearly defined and separated from its exterior. When all constraints are satisfied, a geometry is said to be simple. When a constraint is violated, or it is not known if the constraint is met, then the geometry is said to be non-simple. The ITopologicalOperator, IPolygonN, and IPolylineN interfaces provide operations for testing and enforcing simplicity. The SDK documentation for the Simplify method of ITopologicalOperator describes these rules precisely.
Each vertex of a geometry, in addition to its X and Y coordinates, can have optional attributes, called vertex attributes. The Z vertex attribute is a double-precision value that can be used to represent heights or depths relative to a vertical coordinate system. The M vertex attribute, also called a measure, is a double-precision value that can be used to establish a linear reference system on a geometry (usually a polyline), such as the exits along a highway. The ID vertex attribute, also called a point ID, is a signed integer that can be used as a foreign database key to associate additional information with each vertex such as survey measurements. Vertex attributes can be added to or removed from any geometry at any time and in any combination. For example, a polyline could start out with no vertex attributes, have Zs added to it, then have IDs added to it, then have its Zs removed. When a geometry is aware of its vertex attributes, those attributes will be persisted as part of the geometry and will be included in the output of topological operations that involve that geometry. If a geometry is not aware of its attributes, then those attributes will be ignored when the geometry is persisted, and the attributes will not appear in the output of a topological operation involving that geometry. The attribute awareness of a geometry is controlled by the IZAware, IMAware, or IPointIDAware interface. The attribute values are not actually removed from a geometry if its awareness is disabled.
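For example, the following minimal sketch (assuming an existing IPolyline variable named polyline whose vertices carry Z values) toggles attribute awareness through IZAware; the same pattern applies to IMAware and IPointIDAware.
[C#]
// Minimal sketch: toggle Z-awareness on an existing geometry.
IZAware zAware = (IZAware)polyline;
zAware.ZAware = true;  // Zs are now persisted and carried into the output of topological operations
// ...
zAware.ZAware = false; // Zs are ignored on persist, but the Z values themselves are not removed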
Geometries, especially the segment types, have a rich set of methods for defining their location. For example, the IConstructCircularArc interface shows the different ways you can define a circular arc segment. Typically, interfaces or methods that include the word "construct" in their name use a set of input parameters (including other geometries) to completely define the target geometry. The inputs are not altered.
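As an illustration, the sketch below uses IConstructCircularArc.ConstructCircle to define a circular arc from a center point and a radius; the coordinate values are arbitrary.
[C#]
// Minimal sketch: construct a full circle as a CircularArc segment.
IPoint center = new PointClass();
center.PutCoords(5.0, 5.0);
IConstructCircularArc constructArc = new CircularArcClass();
constructArc.ConstructCircle(center, 2.5, false); // center point, radius, counterclockwise flag
ICircularArc circularArc = (ICircularArc)constructArc;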
Top-level geometries support the classical set-theoretic operations for generating new geometries, including union, intersection, difference, and symmetric difference. These operations are exposed on the ITopologicalOperator interface and usually operate on a pair of geometries at a time; ITopologicalOperator.ConstructUnion can operate on more than two. New geometries are created to represent the results. Top-level geometries also support the IRelationalOperator interface, which can perform a variety of tests on a pair of geometries, such as disjoint, contains, and touches.
Both of these interfaces use the spatial reference associated with the input geometries when determining the answer. Two important properties of a spatial reference are its coordinate grid and its XY tolerance. Different values for these properties can cause the relational and topological operators to produce different results.
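For example, the following sketch (assuming two existing polygons, polygon1 and polygon2, that share the same spatial reference) combines and compares them.
[C#]
// Minimal sketch: set-theoretic and relational operations on a pair of polygons.
ITopologicalOperator topoOp = (ITopologicalOperator)polygon1;
IGeometry union = topoOp.Union(polygon2);
IGeometry intersection = topoOp.Intersect(polygon2, esriGeometryDimension.esriGeometry2Dimension);
IRelationalOperator relOp = (IRelationalOperator)polygon1;
bool areDisjoint = relOp.Disjoint(polygon2);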
Introduction to spatial references and coordinate grids
Geometries are georeferenced to the real world through a spatial reference. A spatial reference includes the coordinate system and several coordinate grids. A coordinate system includes information such as the unit of measure, the earth model used and, sometimes, how the data was projected. The coordinate grids are mathematical functions that define the XY, Z, and M resolution values and the corresponding domain extents. Each spatial reference also has a set of tolerance values. A geometry’s coordinates (or vertex attributes) must fall within the domain extent and be rounded to the resolution. The tolerance values are used by geometric operations that relate coordinates or compute new ones.
XY values can be georeferenced with a geographic or projected coordinate system. A geographic coordinate system (GCS) is defined by a datum, an angular unit of measure, usually either degrees or grads, and a prime meridian. A projected coordinate system (PCS) consists of a linear unit of measure, usually meters or feet, a map projection, the specific parameters used by the map projection, and a GCS. A PCS or GCS can have a vertical coordinate system as an optional property. A vertical coordinate system (VCS) georeferences Z values. A VCS includes either a geodetic or vertical datum, a linear unit of measure, an axis direction, and a vertical shift. M, or measure, values do not have a coordinate system.
A spatial reference that includes an unknown coordinate system (UCS) includes a grid (domain extent) and a tolerance only. It is not possible to georeference a geometry associated with a UCS. If at all possible, you should not use a UCS. When a GCS or PCS is used, appropriate default XY domain extent, resolution, and tolerance values can be calculated. All grid and tolerance information for coordinates and attributes are associated with the PCS, GCS, or UCS. A VCS georeferences Z coordinates but does not have a well-defined default grid.
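The sketch below shows one way these pieces fit together: a predefined PCS is created, then given default coordinate grid and tolerance values. The particular coordinate system (NAD 1983 UTM zone 10N) is only an example.
[C#]
// Minimal sketch: create a predefined projected coordinate system and assign
// default coordinate grid (domain extent and resolution) and tolerance values.
ISpatialReferenceFactory srFactory = new SpatialReferenceEnvironmentClass();
IProjectedCoordinateSystem pcs = srFactory.CreateProjectedCoordinateSystem(
    (int)esriSRProjCSType.esriSRProjCS_NAD1983UTM_10N);
ISpatialReferenceResolution srResolution = (ISpatialReferenceResolution)pcs;
srResolution.ConstructFromHorizon();   // derive the XY domain extent from the coordinate system's horizon
srResolution.SetDefaultXYResolution(); // assign the default XY resolution
ISpatialReferenceTolerance srTolerance = (ISpatialReferenceTolerance)pcs;
srTolerance.SetDefaultXYTolerance();   // assign the default XY tolerance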
For further information see:
Understanding coordinate management in ArcGIS
How to snap a point to a coordinate grid
Working with spatial references
Using geometry objects
Some of the more widely used geometry objects are described in this section.
GeometryEnvironment
GeometryEnvironment provides a way of creating geometries from different inputs and setting or getting global variables for controlling the behavior of geometry methods. It also provides Java and .NET friendly versions of methods originally defined on other geometry objects (see the IGeometryBridge and IGeometryBridge2 interfaces). The GeometryEnvironment object is a singleton object, so calling new several times doesn't create a new object each time. Instead, it returns a reference to the existing GeometryEnvironment. See Interacting with singleton objects for details.
For additional information, see Working with GeometryEnvironment.
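A brief sketch of the singleton behavior described above:
[C#]
// Minimal sketch: both calls to new return a reference to the same GeometryEnvironment singleton.
IGeometryEnvironment geomEnv1 = new GeometryEnvironmentClass();
IGeometryEnvironment geomEnv2 = new GeometryEnvironmentClass();
// geomEnv1 and geomEnv2 refer to the same underlying COM object, so settings made
// through one reference are visible through the other.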
Envelope
For many applications, the coordinates of a geometry are treated as existing in a planar (Cartesian) coordinate space. An Envelope object is a rectangle with sides parallel to that space defining the spatial extent of a geometry. It can also describe the extent of the geometry’s Z and M vertex attributes. You can obtain copies of envelopes of other geometries or create envelopes directly. In the first case, the spatial reference of the envelope is the spatial reference of its defining geometry.
One way to use envelopes for geometric operations is explained in How to find the combined extent of two geometries.
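For example, a minimal sketch that combines the extents of two existing geometries (geometry1 and geometry2 are assumed to exist):
[C#]
// Minimal sketch: the Envelope property returns a copy of a geometry's extent;
// IEnvelope.Union expands the first envelope to include the second.
IEnvelope combinedExtent = geometry1.Envelope;
combinedExtent.Union(geometry2.Envelope);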
GeometryBag
GeometryBag is a set of references to other geometry objects supporting the IGeometry interface. Objects of any level (Polyline, Polygon, MultiPatch, Multipoint, segments, etc.) can be added to GeometryBag via the IGeometryCollection interface. However, placing objects of different geometry types may not be suitable when using GeometryBag in some topological operations. For example, GeometryBag must contain strictly polygons, strictly polylines, or strictly envelopes when using it as a parameter to ITopologicalOperator.ConstructUnion. Also, Project/ProjectEx methods should not be applied on GeometryBag if it contains segments.
As with other geometries, a geometry bag has a spatial reference property. A geometry added to a bag will reference the same spatial reference as the bag. If the bag has no spatial reference, then neither will the added geometry after it is added to the bag. This is usually an error. Take care to define the spatial reference of the bag before adding geometries to it.
How to create a union of several polygons illustrates how to use GeometryBag to union a series of polygons, creating a single output geometry.
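A condensed sketch of that pattern follows (polygon1 and polygon2 are assumed to exist and to use the spatial reference assigned to the bag).
[C#]
// Minimal sketch: union several polygons through a GeometryBag.
IGeometryBag geometryBag = new GeometryBagClass();
IGeometry bagGeometry = (IGeometry)geometryBag;
bagGeometry.SpatialReference = polygon1.SpatialReference; // define the bag's spatial reference first
IGeometryCollection bagCollection = (IGeometryCollection)geometryBag;
object missing = Type.Missing;
bagCollection.AddGeometry(polygon1, ref missing, ref missing);
bagCollection.AddGeometry(polygon2, ref missing, ref missing);
ITopologicalOperator unionedPolygon = (ITopologicalOperator)new PolygonClass();
unionedPolygon.ConstructUnion((IEnumGeometry)geometryBag);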
Point
A Point object is a two-dimensional point, optionally with M, Z, and ID attributes.
Multipoint
A Multipoint object is an ordered collection of points that optionally has M, Z, and ID attributes. The IPointCollection interface implemented by a Multipoint object provides direct access to its point elements. This is different than how IPointCollection behaves when that interface is used to provide access to the vertices of a polyline or polygon. In that case, you are working with copies of the points.
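A condensed sketch of the pattern covered in more detail by the topics listed below (coordinates are arbitrary):
[C#]
// Minimal sketch: create a multipoint and add two points directly to it.
IPointCollection multipointPoints = new MultipointClass();
object missing = Type.Missing;
IPoint point1 = new PointClass();
point1.PutCoords(0.0, 0.0);
IPoint point2 = new PointClass();
point2.PutCoords(10.0, 5.0);
multipointPoints.AddPoint(point1, ref missing, ref missing);
multipointPoints.AddPoint(point2, ref missing, ref missing);
IMultipoint multipoint = (IMultipoint)multipointPoints;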
The following examples illustrate how to create a multipoint geometry:
How to create a multipoint
How to create a multipoint object from the vertices of a polyline
Polyline
A Polyline object is an ordered collection of paths that optionally has M, Z, and ID attributes. The IPointCollection interface on Polyline manipulates copies of its vertices. Use the IGeometryCollection interface to directly access its paths and the ISegmentCollection interface to directly access its segments.
The IPointCollection and ISegmentCollection interfaces are also available on Path objects and are characterized the same way.
The polyline structure is shown in the following diagram.
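The following sketch (assuming an existing IPolyline named polyline) shows direct access to its paths and segments:
[C#]
// Minimal sketch: walk the paths of a polyline, then the segments of each path.
IGeometryCollection polylinePaths = (IGeometryCollection)polyline;
for (int i = 0; i < polylinePaths.GeometryCount; i++)
{
    IPath path = (IPath)polylinePaths.get_Geometry(i);
    ISegmentCollection pathSegments = (ISegmentCollection)path;
    for (int j = 0; j < pathSegments.SegmentCount; j++)
    {
        ISegment segment = pathSegments.get_Segment(j);
        // segment.Length, segment.FromPoint, and so on are available here.
    }
}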
Clothoid spirals are supported through piecewise linear approximations constructed with the IConstructClothoid interface, introduced at ArcGIS 10.
Starting at ArcGIS 10, vertical segments in polylines are also supported. For detailed information, please see Working with vertical polyline segments.
Polygon
A Polygon object is a collection of rings ordered by their containment relationship, and it optionally has M, Z, and ID attributes. Each ring is a collection of segments. The IPointCollection interface on Polygon and Ring manipulates copies of vertices. Use the IGeometryCollection and ISegmentCollection interfaces to access rings and segments directly.
The polygon structure is shown in the following diagram.
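As a small illustration, the sketch below builds a rectangular polygon directly from its segment collection (the coordinates are arbitrary):
[C#]
// Minimal sketch: create a rectangular polygon through ISegmentCollection.
IEnvelope envelope = new EnvelopeClass();
envelope.PutCoords(0.0, 0.0, 100.0, 50.0); // xmin, ymin, xmax, ymax
ISegmentCollection polygonSegments = new PolygonClass();
polygonSegments.SetRectangle(envelope);
IPolygon polygon = (IPolygon)polygonSegments;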
MultiPatch
The MultiPatch geometry type was initially developed to address the needs for a three-dimensional (3D) polygon geometry type—unconstrained by two-dimensional (2D) validity rules. Without eliminating the constraints that rule out vertical walls, for example, representing extruded 2D lines and footprint-polygons for 3D visualization would not be possible. In addition to eliminating 2D constraints, MultiPatches provide more control over polygon face orientations, and a better definition of polygon face interiors.
Since ArcGIS version 9.0, MultiPatches have also been extended to provide advanced geometric representations for 3D features. These complex 3D objects can be part of a Synthetic Landscape Model, stored in a geodatabase. The target of these extensions is improved visualization quality.
MultiPatches describe 3D geometries that can have multiple, textured surfaces. They can also store vertex normals, vertex ids, vertex measures and several part-level attributes. You can create MultiPatches by importing data from a variety of different file formats (3D Studio Max .3ds files, OpenFlight .flt files, COLLADA .dae files, Sketchup .skp files, VRML .wrl files). You can also create them programmatically in several different ways. MultiPatches without textures, normals, or part attributes can be defined in a manner similar to building a polygon: create the parts, create a MultiPatch, then use the latter’s IGeometryCollection interface to add the parts. Creating a MultiPatch with textures, normals or part attributes requires the use of the GeneralMultiPatchCreator helper object (requires a 3D Analyst license). You can obtain information on normals and materials from an existing MultiPatch by using its IGeneralMultiPatchInfo interface.
The relationship between the objects used in MultiPatch construction is illustrated here.
MultiPatches support the IRelationalOperator3D interface, which contains the Disjoint3D method, and the IProximityOperator3D interface, which has methods for reporting the nearest distance from a query geometry to the MultiPatch and the nearest point on the MultiPatch from a query geometry. These interfaces treat MultiPatches as a collection of surfaces with no interior, so if you have a MultiPatch in the shape of a cube, and a point apparently inside that cube, the point will be classified as disjoint because it is not intersecting any of the cube's sides.
MultiPatches contain multiple parts: TriangleStrips, TriangleFans, Triangles, and rings. The latter is the same COM object type as used in polygons, but unconstrained by 2D validity rules, so they can be vertical or have vertical segments. Each part in a MultiPatch has several additional properties:
- Type (TriangleStrip, TriangleFan, Triangle, Outer Ring, etc)
- Priority—used to control the drawing order of overlapping parts
- An index into the array of materials (see the 'Material and Textures' section below).
You can extrude polylines and polygons in various ways to define MultiPatches, or construct them explicitly (see the IConstructMultiPatch interface and the IExtrude interface on the GeometryEnvironment singleton object). MultiPatches can be stored as the geometry for a feature or used to symbolize points by drawing 3D models at the points’ locations. You can specify per-vertex normals to orient faces and provide more control over how MultiPatches are lit. MultiPatches can contain materials (specifying color, texture, and transparency information) and texture coordinates that specify the placement of textures on each part. ArcScene provides style galleries containing MultiPatch models for 3D buildings (industrial and residential), trees, vehicles, street 'furniture' and other thematic categories.
You can compute the volume of a MultiPatch. The number will be meaningful if the MultiPatch is a closed surface (or has an opening that forms an implied lower face). You can compute the tessellated surface area using IArea3D. You can project a MultiPatch onto a plane (usually the xy plane). This operation is used to generate MultiPatch footprints when MultiPatches are operands to a spatial relation method. A ray can also be intersected with a MultiPatch.
Triangle Strips
A TriangleStrip is a type of surface patch, defined by a collection of points that specify the triangle surfaces that comprise it. For a TriangleStrip with six points, the triangle surfaces are defined by points: (0, 1, 2), (2, 1, 3), (2, 3, 4), (4, 3, 5). The front side, or face, is established by orienting the first three vertices clockwise.
Triangle Fans
A TriangleFan is a type of surface patch defined by a collection of points that specify the triangle surfaces that comprise it. Unlike a TriangleStrip, a TriangleFan is comprised of a set of 3D triangles where the first point defines the apex or origin point and is included in all of the triangle surfaces. For a TriangleFan with six points, the triangle surfaces are defined by points: (0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5). The front side is established by orienting the first three vertices clockwise.
Triangles
A Triangle part is a type of surface patch used to construct a MultiPatch, defined by a collection of points that specify the triangle surfaces that comprise it. Each consecutive triplet of vertices defines a new triangle, and the vertex count must be a multiple of three. For a Triangles part with six points, the triangle surfaces are defined by points: (0, 1, 2), (3, 4, 5). The front side of each triangle is established by orienting its vertices clockwise.
Rings as used in MultiPatches
Rings are used in polygons and in MultiPatches. Those in MultiPatches are not constrained by 2D validity rules, so you can use a MultiPatch containing vertically oriented rings to represent vertical walls. A given ring does not have to be planar. That is, the vertices do not have to all lie in one plane. The interior of a non-planar ring is defined by how it is tessellated, which is currently done by the OpenGL utility library tessellator. A group of rings in a MultiPatch can define a surface with holes. A ring type (First Ring, Outer Ring, Inner Ring, etc.) is assigned to the ring immediately after it has been added to the MultiPatch:
- Outer Ring: the exterior or outer ring of a MultiPatch surface (analogous to an outer ring in a polygon)
- Inner Ring: the interior or hole within a MultiPatch surface (analogous to an inner ring in a polygon)
- First Ring: the first ring of a group of rings with an unspecified containment relation
- Ring: another ring of a group of rings with an unspecified containment relation
The sequence typically consists of an Outer Ring, representing the outer boundary of the surface, followed by a number of Inner Rings representing holes. When the individual types of rings in a collection of rings representing a polygonal patch with holes are unknown, the sequence must start with First Ring, followed by a number of Rings. A sequence of Rings not preceded by a First Ring is treated as a sequence of Outer Rings without holes. In cases in which a MultiPatch is defined using Rings with no holes or interiors, the basic Ring role is used for convenience, although Outer Ring would work just as well.
The illustration below shows a MultiPatch model of a building (note the use of texture, described in detail later). Five of the parts, forming a ring group, are highlighted in blue and gray. They are located on the closest building. The blue part is an outer ring. The gray parts are inner rings, constructed in the same plane as the outer ring.
Outer Ring and Inner Ring nomenclature is a more structured way of representing a surface than a First Ring and Ring series. The former explicitly defines that any Inner Ring immediately following an Outer Ring is a hole in that Outer Ring. In such a sequence, an Inner Ring must always follow either an Outer Ring or another Inner Ring; anything else is an error and ends the Outer/Inner group.
Surface Normals in MultiPatches
Each MultiPatch vertex can have a persistent surface normal associated with it. Without explicit, persistent surface normals, only flat shading of faces (individual triangles) is possible. With normals, lighting equations can be interpolated across faces to provide smoother shading. This might let you render smoothly curving surfaces using less geometric data.
Materials and Textures
MultiPatches can store surface descriptions, called materials. A material adds detail to the MultiPatch’s appearance without increasing its geometric complexity. Each part can reference a material and each material can be referenced by more than one part. A material has a color, an overall transparency, and optionally a texture, which can have a transparent color or an alpha channel. Texture coordinates specify the placement of textures on each part. A texture is a raster image that is draped onto the surface of the MultiPatch. JPEG compressed raster files can be used.
In the context of a MultiPatch creator, material 'index' and material 'type' are synonymous and represent the index of a material within the MultiPatch used to symbolize a part. Parts within a MultiPatch can share materials.
Texture coordinates are pairs of floating point values associated with a vertex. The set of texture vertices associated with a MultiPatch tell the rendering system how to drape the texture. Texture coordinates are normally in the range [0,1]. Values larger than 1 cause the texture to be tiled along that coordinate axis.
The picture below shows the relationship between the texture data (left side), texture coordinates: (u, v) pairs, geometric coordinates (x, y, z) tuples, and the rendered MultiPatch model (right side). The origin of the texture coordinate system is the upper left hand corner.
Creating a MultiPatch that uses textures requires the GeometryMaterial COM object (3D Analyst), an array of GeometryMaterials called a GeometryMaterialList, and a GeneralMultiPatchCreator (3D Analyst). The GeneralMultiPatchCreator assembles the material and geometry information into a MultiPatch shapefile buffer and then hydrates a MultiPatch from it. This process requires that all texture data be expanded in memory. MultiPatches themselves support shapefile buffers containing compressed texture data. Different MultiPatches cannot share texture data.
For further information see:
How to create a polyline
How to create a multipart polyline
How to create a polygon
How to build a polygon using segments and points
How to create a multipatch using a series of triangles
GeometryServer
The GeometryServer object provides access to useful and powerful geometric operations, such as buffering and projection.
The example below computes the areas of a set of polygons obtained from an ArcGIS Server Map Service. The polygons returned from the map service are defined in a geographic spatial reference. They are densified, projected to an equal-area spatial reference, and their areas are then computed.
This example also uses the SOAP API to communicate over the internet with these web services.
For additional details on using more than one ArcGIS server web service in a single client application, refer to other parts of the ESRI SOAP documentation. Briefly, the "Add Web Reference" functionality available through the Visual Studio interface is not used. Instead, the wsdl.exe tool is invoked directly, using the ‘/sharetypes’ option, in order to combine WSDL documents from multiple web services. The resulting source code file is then manually added to the C#.NET project.
The command used to generate the shared types proxies is:
wsdl.exe /sharetypes /language:CS /namespace:ArcGISWebServices /out:.\ArcGISSOAP_SharedObjects.cs /protocol:SOAP http://geometria/arcgis/services/MapService/MapServer?wsdl http://geometria/arcgis/services/Geometry/GeometryServer?wsdl
See the .NET SDK documentation on wsdl.exe for more details.
[C#]
// This C# example illustrates using a Geometry Service to compute the areas of polygons obtained from a Map Service
using System;
using System.Collections.Generic;
using System.Text;
using ArcGISWebServices;
namespace GSClientTest
{
class Program
{
static void Main(string[] args)
{
ArcGISGeometryServer gs = new ArcGISGeometryServer();
gs.Url = "http://geometria/arcgis/services/Geometry/GeometryServer";
ArcGISMapServer ms = new ArcGISMapServer();
ms.Url = "http://geometria/arcgis/services/States/MapServer";
// Ask the "States" Map Service for all features in the first layer, which is assumed to be a polygon layer.
QueryFilter qf = new QueryFilter();
qf.SubFields = "*";
RecordSet rs = ms.QueryFeatureData("Layers", 0, qf);
// Find the geometry field in the returned record set, and also get the spatial reference for the record set
SpatialReference sr = null;
int iGeometry;
for (iGeometry = 0; iGeometry < rs.Fields.FieldArray.Length; iGeometry++)
if (rs.Fields.FieldArray[iGeometry].Type ==
esriFieldType.esriFieldTypeGeometry)
{
Field f = rs.Fields.FieldArray[iGeometry];
sr = f.GeometryDef.SpatialReference;
break;
}
int j;
Geometry[] aG = new Geometry[rs.Records.Length];
for (j = 0; j < rs.Records.Length; j++)
aG[j] = (Geometry)rs.Records[j].Values[iGeometry];
Geometry[] aGDens = gs.Densify(sr, aG, 0.1, false, 0.0);
// Project the geometries to an equal area projection in order to calculate their areas
// The target spatial reference (WKID 2163) is US National Atlas Equal Area (a Lambert azimuthal equal area projection).
SpatialReference projSR = gs.FindSRByWKID(null, 2163, 0, true, true);
Geometry[] aGProj = gs.Project(sr, projSR, false, null, null, aGDens);
double[] lengths = new double[rs.Records.Length];
// "cast" the array of geometries to an array of polygons
Polygon[] aP = new Polygon[rs.Records.Length];
aGProj.CopyTo(aP, 0);
double[] areas = gs.GetAreasAndLengths(projSR, aP, out lengths);
// Report the computed areas.
for (j = 0; j < areas.Length; j++)
    System.Console.WriteLine("Area {0}: {1}", j, areas[j]);
}
}
}
[VB.NET]
Module Module1
Sub Main()
Dim gs As ESRI.ArcGIS.Geometry.IGeometryServer
gs = New ESRI.ArcGIS.Geometry.GeometryServerImpl()
gs.Url = "http://geometria/arcgis/services/Geometry/GeometryServer"
Dim ms As ESRI.ArcGIS.Carto.MapServer
ms = New ESRI.ArcGIS.Carto.MapServerClass()
ms.Url = "http://geometria/arcgis/services/States/MapServer"
'Ask the "States" Map Service for all features in the first layer, which is assumed to be a polygon layer.
Dim qf As ESRI.ArcGIS.Geodatabase.IQueryFilter
qf = New ESRI.ArcGIS.Geodatabase.QueryFilter()
qf.SubFields = "*"
Dim rs As ESRI.ArcGIS.Geodatabase.IRecordSet
rs = ms.QueryFeatureData("Layers", 0, qf)
'Find the geometry field in the returned record set, and also get the spatial reference for the record set
Dim sr As ESRI.ArcGIS.Geometry.ISpatialReference
sr = Nothing
Dim iGeometry As Integer
For iGeometry = 0 To rs.Fields.FieldArray.Length - 1
If (rs.Fields.FieldArray(iGeometry).Type = ESRI.ArcGIS.Geodatabase.esriFieldType.esriFieldTypeGeometry) Then
Dim f As ESRI.ArcGIS.Geodatabase.IField
f = rs.Fields.FieldArray(iGeometry)
sr = f.GeometryDef.SpatialReference
Exit For
End If
Next iGeometry
Dim j As Integer
Dim aG(rs.Records.Length - 1) As ESRI.ArcGIS.Geometry.IGeometry
For j = 0 To rs.Records.Length - 1
aG(j) = rs.Records(j).Values(iGeometry)
Next j
Dim aGDens As ESRI.ArcGIS.Geometry.IGeometryArray
aGDens = gs.Densify(sr, aG, 0.1, False, 0.0)
'Project the geometries to an equal area projection in order to calculate their areas
'The target spatial reference (WKID 2163) is US National Atlas Equal Area (a Lambert azimuthal equal area projection).
Dim projSR As ESRI.ArcGIS.Geometry.ISpatialReference
projSR = gs.FindSRByWKID(Nothing, 2163, 0, True, True)
Dim aGProj As ESRI.ArcGIS.Geometry.IGeometryArray
aGProj = gs.Project(sr, projSR, False, Nothing, Nothing, aGDens)
'"cast" the array of geometries to an array of polygons
Dim aP(rs.Records.Length - 1) As ESRI.ArcGIS.Geometry.Polygon
aGProj.CopyTo(aP, 0)
Dim lengths As ESRI.ArcGIS.esriSystem.IDoubleArray
Dim areas As ESRI.ArcGIS.esriSystem.IDoubleArray
gs.GetAreasAndLengths(projSR, aP, areas, lengths)
End Sub
End Module
BufferConstruction object
The BufferConstruction object implements an updated, highly robust geometric buffering operation. It is designed to process a large number of inputs without requiring that all inputs and outputs exist in main memory at the same time. It provides additional buffering options that are not possible with the current ArcObjects Buffer method. For example, geodesic buffers can be generated around points. Finally, it provides a compatibility operation that lets clients of the existing Buffer operation use this object with a minimal impact on their existing code. In fact, the simplest way to use the buffer construction object is:
[C#]
IBufferConstruction bc = new BufferConstructionClass();
IGeometry buffer = bc.Buffer(myInputGeometry, distance);
The capabilities of the BufferConstruction object include:
- The IBufferConstructionProperties interface provides several new options for construction of geometric buffers
- Polyline buffers can be constructed to the left or the right of the polyline
- Polygon buffers can exclude the interior of the polygon
- Curve segments can be added to the output buffers
- Buffers can be dissolved together in places where they overlap
- A separate polygon can be output for each non-overlapping portion of a buffer
- True geodesic buffers can be generated around points that are in a geographic coordinate system
- Geometries of different types can be buffered together
- Multiple buffers can be generated around the same set of inputs
- In the context of a single buffering operation, different features can be buffered by different distances
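The sketch below sets a few of these options through IBufferConstructionProperties before buffering a single geometry; inputGeometry and the distance are assumptions, and the example assumes the convenience Buffer method honors the current property settings.
[C#]
// Minimal sketch: configure buffering options, then buffer one input geometry.
IBufferConstruction bufferConstruction = new BufferConstructionClass();
IBufferConstructionProperties bufferProps = (IBufferConstructionProperties)bufferConstruction;
bufferProps.UnionOverlappingBuffers = true; // dissolve buffers where they overlap
bufferProps.GenerateCurves = true;          // allow curve segments in the output
IGeometry bufferPolygon = bufferConstruction.Buffer(inputGeometry, 25.0);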
The buffer construction object requires the use of temporary files. They will be placed in the directory identified by the pathname contained in the environment variable ARCTMPDIR. If the variable does not exist, then the system TEMP location will be used. Temporary files are removed at the end of a buffer operation.
Using spatial reference objects
Some of the more widely used spatial reference objects are described in this section.
SpatialReferenceEnvironment
SpatialReferenceEnvironment is a singleton object used for creating, loading, and storing entire spatial references. Spatial references are often cloned and copied internally. Setting up SpatialReferenceEnvironment as a singleton object conserves resources and makes it less likely that a spatial reference will be deleted before it is no longer in use. SpatialReferenceEnvironment can also create predefined components used for building spatial references (projections, datums, prime meridians, and so on). You can also use it to convert between low and high precision spatial references.
The following topics include additional information on how to use SpatialReferenceEnvironment for the functions referenced above.
- Using the SpatialReferenceEnvironment
- Constructing a high- or low-precision spatial reference
- Converting between high- and low-precision spatial references
- How to import or export a spatial reference
GeographicCoordinateSystem
GeographicCoordinateSystem includes a name, angular unit of measure, datum (which includes a spheroid), and a prime meridian. It is a model of the earth in a 3D coordinate system. Latitude-longitude, or lat/lon, data is in a GCS. You can access the majority of the properties and methods through the IGeographicCoordinateSystem interface, with a few more properties available in IGeographicCoordinateSystem2. Although you probably won't need to create a custom GCS, the IGeographicCoordinateSystemEdit interface contains the Define and DefineEx methods if you do.
The following code demonstrates how the DefineEx method can be used. It uses a SpatialReferenceFactory to create the Datum, PrimeMeridian, and Unit components.
[VCPP]
// Smart pointer variables used
IDatumPtr ipDatum;
IPrimeMeridianPtr ipPrimeMeridian;
IUnitPtr ipUnit;
IAngularUnitPtr ipAngularUnit;
// Create the factory and the component parts
ISpatialReferenceFactoryPtr ipFactory(CLSID_SpatialReferenceEnvironment);
ipFactory->CreateDatum(esriSRDatum_OSGB1936, &ipDatum);
ipFactory->CreatePrimeMeridian(esriSRPrimeM_Greenwich, &ipPrimeMeridian);
ipFactory->CreateUnit(esriSRUnit_Degree, &ipUnit);
IGeographicCoordinateSystemEditPtr ipGeoCSEdit(CLSID_GeographicCoordinateSystem);
IGeographicCoordinateSystemPtr ipGCS;
// QI for the AngularUnit from the Unit
// - this is achieved by the SmartPointers
ipAngularUnit = ipUnit;
// Make the string descriptions
CComBSTR name(_T("User Defined Geographic Coordinate System"));
CComBSTR alias(_T("UserDefined"));
CComBSTR abbreviation(_T("User"));
CComBSTR remarks(_T("User-defined GCS based on OSGB1936"));
CComBSTR usage(_T("Suitable for the UK"));
// Make the call
HRESULT hr;
hr = ipGeoCSEdit->DefineEx(name, alias, abbreviation, remarks, usage, ipDatum,
    ipPrimeMeridian, ipAngularUnit);
// QI for the result
ipGCS = ipGeoCSEdit;
[C#]
IDatum ipDatum;
IPrimeMeridian ipPrimeMeridian;
IUnit ipUnit;
IAngularUnit ipAngularUnit;
// Create the factory and the component parts
ISpatialReferenceFactory ipFactory = new SpatialReferenceEnvironmentClass();
ipDatum = ipFactory.CreateDatum((int)esriSRDatumType.esriSRDatum_OSGB1936);
ipPrimeMeridian = ipFactory.CreatePrimeMeridian((int)
esriSRPrimeMType.esriSRPrimeM_Greenwich);
ipUnit = ipFactory.CreateUnit((int)esriSRUnitType.esriSRUnit_Degree);
IGeographicCoordinateSystemEdit ipGeoCSEdit = new GeographicCoordinateSystemClass();
// QI for the AngularUnit from the Unit
ipAngularUnit = ipUnit as IAngularUnit;
// Make the string descriptions
string name = "User Defined Geographic Coordinate System";
string alias = "UserDefined";
string abbreviation = "User";
string remarks = "User-defined GCS based on OSGB1936";
string usage = "Suitable for the UK";
// Make the call
ipGeoCSEdit.DefineEx(name, alias, abbreviation, remarks, usage, ipDatum,
    ipPrimeMeridian, ipAngularUnit);
// QI for the result
IGeographicCoordinateSystem ipGCS = ipGeoCSEdit as IGeographicCoordinateSystem;
[VB.NET]
Dim ipDatum As ESRI.ArcGIS.Geometry.IDatum
Dim ipPrimeMeridian As ESRI.ArcGIS.Geometry.IPrimeMeridian
Dim ipUnit As ESRI.ArcGIS.Geometry.IUnit
Dim ipAngularUnit As ESRI.ArcGIS.Geometry.IAngularUnit
'Create the factory and the component parts
Dim ipFactory As ESRI.ArcGIS.Geometry.ISpatialReferenceFactory
ipFactory = New ESRI.ArcGIS.Geometry.SpatialReferenceEnvironment()
ipDatum = ipFactory.CreateDatum(ESRI.ArcGIS.Geometry.esriSRDatumType.esriSRDatum_OSGB1936)
ipPrimeMeridian = ipFactory.CreatePrimeMeridian(ESRI.ArcGIS.Geometry.esriSRPrimeMType.esriSRPrimeM_Greenwich)
ipUnit = ipFactory.CreateUnit(ESRI.ArcGIS.Geometry.esriSRUnitType.esriSRUnit_Degree)
Dim ipGeoCSEdit As ESRI.ArcGIS.Geometry.IGeographicCoordinateSystemEdit
ipGeoCSEdit = New ESRI.ArcGIS.Geometry.GeographicCoordinateSystem()
'QI for the AngularUnit from the Unit
ipAngularUnit = ipUnit
'Make the string descriptions
Dim Name As String, _alias As String, abbreviation As String, remarks As String, usage As String
Name = "User Defined Geographic Coordinate System"
_alias = "UserDefined"
abbreviation = "User"
remarks = "User-defined GCS based on OSGB1936"
usage = "Suitable for the UK"
'Make the call
ipGeoCSEdit.DefineEx(Name, _alias, abbreviation, remarks, usage, ipDatum, ipPrimeMeridian, ipAngularUnit)
'QI for the result
Dim ipGCS As ESRI.ArcGIS.Geometry.IGeographicCoordinateSystem
ipGCS = ipGeoCSEdit
To access the hundreds of predefined GCSs, ISpatialReferenceFactory has the CreateGeographicCoordinateSystem method. The predefined GCSs are listed in the esriSRGeoCSType, esriSRGeoCS2Type, and esriSRGeoCS3Type enumerations. The parts of a GCS, such as the datum, angular unit, and prime meridian, are objects as well. All support ISpatialReference2 and ISpatialReferenceFactory. Make use of the predefined objects available in the various esriSR* enumerations.
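For example, a minimal sketch that creates the predefined WGS 1984 GCS:
[C#]
// Minimal sketch: create a predefined geographic coordinate system from its enumeration value.
ISpatialReferenceFactory srFactory = new SpatialReferenceEnvironmentClass();
IGeographicCoordinateSystem gcsWGS84 = srFactory.CreateGeographicCoordinateSystem(
    (int)esriSRGeoCSType.esriSRGeoCS_WGS1984);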
The IGeographicCoordinateSystem2 interface supplies the AngularConversionFactor method, which will return a value that converts the units of measure between two GCSs.
The ExtentHint, LeftLongitude, and RightLongitude properties are interrelated. Usually, data in a GCS has longitude values between -180 and 180 if the unit of measure is degrees. Some datasets are designed to use a minimum longitude value of 0 or -360. The LeftLongitude property controls whether the data is considered as -360 to 0, -180 to 180, or 0 to 360. This is only pertinent when you’re inverse projecting projected coordinates for storage in a GCS-based feature class that has a non-standard longitude range. The ArcObjects framework usually deals with this detail for you. Note that the left longitude property is not considered when comparing two GCSs for equality.
GetHorizon returns a WKSEnvelope describing the extent of a GCS based on its unit of measure and LeftLongitude. This method can be used to define a standard coordinate grid for the GCS. It is used internally by the ISpatialReferenceResolution.ConstructFromHorizon method.
ProjectedCoordinateSystem
ProjectedCoordinateSystem includes a name, linear unit of measure, GCS, map projection, and any parameters required by the map projection. Using the term 'projection' for a coordinate system is imprecise; 'projection' should be reserved for the actual mathematical function. Transverse Mercator and Lambert Conformal Conic are map projections. Universal Transverse Mercator (UTM) and State Plane are PCSs that are based on particular map projections. Each PCS must include a GCS. Map projection parameters can be linear, angular, or unitless. Unitless parameters include scale factor and option. Angular parameters include the central meridian, standard parallels, and latitude of origin. Linear parameters include false easting and false northing. Use the GetDefaultParameters method on the IProjection interface to determine which parameters a particular map projection expects.
The parts of a PCS, such as the projection, linear unit, and GCS, are objects as well. All support ISpatialReference2 and ISpatialReferenceFactory. When defining a custom PCS, make use of the predefined objects available in the various esriSR* enumerations.
You can access the majority of the properties and methods through the IProjectedCoordinateSystem2 interface although a few more properties are available in IProjectedCoordinateSystem3 and IProjectedCoordinateSystem4. The IProjectedCoordinateSystemEdit interface contains the Define method, which allows you to define a custom PCS. To access the hundreds of predefined PCSs, ISpatialReferenceFactory has the CreateProjectedCoordinateSystem method. The predefined PCSs are listed in the esriSRProjCSType, esriSRProjCS2Type, esriSRProjCS3Type, and esriSRProjCS4Type enumerations.
VerticalCoordinateSystem
VerticalCoordinateSystem includes a name, linear unit of measure, a vertical or geodetic (horizontal) datum, a direction, and optionally, a vertical shift. A VCS defines the origin of Z coordinate values. A common application is for Z values to represent elevations or depths, when Z values increase "up" (against the direction of gravity) or decrease 'down' (in the direction of gravity), respectively. You can access the majority of the properties and methods through the IVerticalCoordinateSystem interface. Although you probably won't need to create a custom VCS, the IVerticalCoordinateSystemEdit interface contains the Define method if you do.
For further information see:
Working with spatial references
Creating a custom geographic coordinate system
Creating a custom projected coordinate system
Creating a custom vertical coordinate system
Using transformation objects
The transformation objects can be used to apply various linear coordinate transformations to top-level geometries (Points, Multipoints, Polylines, and Polygons). Typically, you create a particular kind of transformation object, define its properties, then pass it to the geometry being transformed to perform the transform on that geometry. Only occasionally will you need to extract the points from the geometry and transform them directly, or transform arrays of WKSPoints directly.
AffineTransformation2D
AffineTransformation2D is a 3x3 matrix that implements conformal (angle preserving) affine and general affine transformations. A minimum of two pairs of points are required to exactly define a conformal affine transformation. A 2D conformal transformation is also called a Helmert transformation. A minimum of three pairs of points are required to define a general affine transformation. Additional points are required to determine root mean square (RMS) error information for the transformation. One use for AffineTransformation2D is to register a paper map into a known coordinate system when digitizing.
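A minimal sketch of defining a general affine transformation from three control-point pairs and applying it to an existing polygon follows; the control points and the polygon variable are illustrative, and the .NET-friendly IAffineTransformation2D3GEN interface is assumed.
[C#]
// Minimal sketch: define an affine transformation from control points and transform a geometry in place.
IPoint[] fromPoints = new IPoint[3];
IPoint[] toPoints = new IPoint[3];
double[,] fromCoords = { { 0, 0 }, { 10, 0 }, { 0, 10 } };
double[,] toCoords = { { 100, 100 }, { 110, 100 }, { 100, 110 } };
for (int i = 0; i < 3; i++)
{
    fromPoints[i] = new PointClass();
    fromPoints[i].PutCoords(fromCoords[i, 0], fromCoords[i, 1]);
    toPoints[i] = new PointClass();
    toPoints[i].PutCoords(toCoords[i, 0], toCoords[i, 1]);
}
IAffineTransformation2D3GEN affineTransformation = new AffineTransformation2DClass();
affineTransformation.DefineFromControlPoints(ref fromPoints, ref toPoints);
ITransform2D transform2D = (ITransform2D)polygon; // an existing IPolygon (assumed)
transform2D.Transform(esriTransformDirection.esriTransformForward, (ITransformation)affineTransformation);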
AffineTransformation3D
AffineTransformation3D is a 4x4 matrix that supports definition of general affine transformations from control points. It will not determine conformal affine transformations.
ProjectiveTransformation2D
ProjectiveTransformation2D requires a minimum of four pairs of points to define the transformation. The projective transformation is used only to transform coordinates digitized directly from high-altitude aerial photography, or from aerial photographs of relatively flat terrain, assuming there is no systematic distortion in the photos. A projective transformation uses eight parameters.
Geotransformation
Moving your data between projected coordinate systems may also involve transforming geographic coordinate systems. Because the geographic coordinate systems contain datums that are based on spheroids, a geographic transformation (geotransformation) also changes the underlying spheroid. Other frequently used terms for a geographic transformation include datum shift and geodetic transformation.
A geographic transformation is a mathematical operation that takes the coordinates of a point in one geographic coordinate system and returns the coordinates of the same point in another geographic coordinate system. There is also an inverse transformation to allow coordinates to be put back to the first coordinate system from the second. There are many different types of mathematical operations used to achieve this task.
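For example, a sketch of creating a predefined geotransformation and applying it while projecting a geometry; the geometry variable, the target spatial reference, and the particular geotransformation enumeration member are assumptions, so substitute the transformation appropriate for your data.
[C#]
// Minimal sketch: create a predefined geographic (datum) transformation and use it during projection.
ISpatialReferenceFactory2 srFactory = new SpatialReferenceEnvironmentClass();
IGeoTransformation geoTransformation = (IGeoTransformation)srFactory.CreateGeoTransformation(
    (int)esriSRGeoTransformationType.esriSRGeoTransformation_NAD1983_To_WGS1984_1); // assumed enumeration member
IGeometry2 geometry2 = (IGeometry2)geometry; // an existing geometry (assumed)
geometry2.ProjectEx(targetSpatialReference, esriTransformDirection.esriTransformForward,
    geoTransformation, false, 0.0, 0.0);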
For further information see:
How to perform an affine transformation
Understanding geotransformations
Creating a predefined geotransformation