Convert to pretty quotes

Petr Kraus 2018-10-09 01:12:09 +02:00
parent 6af7805e91
commit d9e73159f2
16 changed files with 45 additions and 45 deletions

View File

@@ -28,7 +28,7 @@ features:
updated, which enables writing new descriptors for frame N+1 while frame
N is executing.
* Relax the requirement that all descriptors in a binding that is
"statically used" must be valid, such that descriptors that are not
"`statically used`" must be valid, such that descriptors that are not
accessed by a submission need not be valid and can be updated while that
submission is executing.
* The final binding in a descriptor set layout can have a variable size

View File

@@ -87,7 +87,7 @@ constants?
by `VK_EXT_descriptor_indexing` for inline uniform blocks?
*RESOLVED*: No, because inline uniform blocks are not allowed to be
"arrayed".
"`arrayed`".
A single binding with an inline uniform block descriptor type corresponds to
a single uniform block instance and the array indices inside that binding
refer to individual offsets within the uniform block (see issue #2).

View File

@@ -52,7 +52,7 @@ None.
(1) Should we specify that the groups of four shader invocations used for
derivatives in a compute shader are the same groups of four invocations
that form a "quad" in shader subgroups?
that form a "`quad`" in shader subgroups?
*RESOLVED*: Yes.
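The quad arrangement this resolution pins down can be made concrete with a small sketch; the 2x2 invocation layout and difference formulas below follow the usual quad convention (invocation ids 0..3 in reading order), which is an assumption of this illustration rather than text from the extension:

```python
# Quad of four invocations arranged 2x2 (assumed layout):
#   0 1
#   2 3
def dfdx(v):
    # Horizontal differences: right minus left, per row of the quad
    return (v[1] - v[0], v[3] - v[2])

def dfdy(v):
    # Vertical differences: bottom minus top, per column of the quad
    return (v[2] - v[0], v[3] - v[1])

vals = [0.0, 1.0, 10.0, 11.0]  # f(x, y) = x + 10*y sampled at the quad
assert dfdx(vals) == (1.0, 1.0)
assert dfdy(vals) == (10.0, 10.0)
```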

View File

@@ -8,7 +8,7 @@ include::meta/VK_NV_corner_sampled_image.txt[]
- Chris Lentini, NVIDIA
This extension adds support for a new image organization, which this
extension refers to as "corner-sampled" images.
extension refers to as "`corner-sampled`" images.
A corner-sampled image differs from a conventional image in the following
ways:
@@ -74,7 +74,7 @@ None.
--
DISCUSSION: While naming this extension, we chose the most distinctive
aspect of the image organization and referred to such images as
"corner-sampled images".
"`corner-sampled images`".
As a result, we decided to name the extension NV_corner_sampled_image.
--
@@ -101,7 +101,7 @@ Unnormalized coordinates are treated as already scaled for corner-sample
usage.
--
. Should we have a diagram in the "Image Operations" chapter demonstrating different texel sampling locations?
. Should we have a diagram in the "`Image Operations`" chapter demonstrating different texel sampling locations?
+
--
UNRESOLVED: Probably, but later.
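Pending such a diagram, the sampling-location difference can be sketched numerically; the formulas below are an illustration of the corner-sampled idea (samples on texel-grid corners, coordinates scaled by n-1 rather than n) and are an assumption of this sketch, not text from the extension:

```python
# Texel sample locations along one axis of an image with n texels,
# in normalized [0, 1] coordinates (illustrative model).
def conventional_centers(n):
    # Conventional images sample at texel centers.
    return [(i + 0.5) / n for i in range(n)]

def corner_sampled(n):
    # Corner-sampled images place samples on the grid corners,
    # including both edges of the image.
    return [i / (n - 1) for i in range(n)]

assert conventional_centers(4) == [0.125, 0.375, 0.625, 0.875]
assert corner_sampled(4) == [0.0, 1 / 3, 2 / 3, 1.0]
```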

View File

@@ -112,13 +112,13 @@ using this extension does not require a constant vertex number.
(2) Why do the built-in SPIR-V decorations for this extension include two
separate built-ins code:BaryCoordNV and code:BaryCoordNoPerspNV when a
"no perspective" variable could be decorated with code:BaryCoordNV and
"`no perspective`" variable could be decorated with code:BaryCoordNV and
code:NoPerspective?
*RESOLVED*: The SPIR-V extension for this feature chose to mirror the
behavior of the GLSL extension, which provides two built-in variables.
Additionally, it's not clear that it's a good idea (or even legal) to have
two variables using the "same attribute", but with different interpolation
two variables using the "`same attribute`", but with different interpolation
modifiers.
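The distinction between the two built-ins can be illustrated with the standard interpolation formulas; this sketch assumes conventional perspective-correct weighting with per-vertex clip-space w, and is illustrative rather than the extension's normative definition:

```python
# Interpolate a per-vertex attribute with barycentric weights (b0, b1, b2).
def interp_linear(b, attrs):
    # "No perspective" mode: plain weighted sum in screen space.
    return sum(bi * ai for bi, ai in zip(b, attrs))

def interp_perspective(b, attrs, ws):
    # Perspective-correct mode: weight by 1/w, then renormalize.
    num = sum(bi * ai / wi for bi, ai, wi in zip(b, attrs, ws))
    den = sum(bi / wi for bi, wi in zip(b, ws))
    return num / den

b = (0.25, 0.25, 0.5)
attrs = (0.0, 1.0, 2.0)
assert interp_linear(b, attrs) == 1.25
# With equal clip-space w the two modes agree.
assert abs(interp_perspective(b, attrs, (1.0, 1.0, 1.0)) - 1.25) < 1e-12
```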
=== Version History

View File

@@ -15,7 +15,7 @@ implementations to reduce the amount of rasterization and fragment
processing work performed for each point, line, or triangle primitive.
For any primitive that produces one or more fragments that pass all other
early fragment tests, the implementation is permitted to choose one or more
"representative" fragments for processing and discard all other fragments.
"`representative`" fragments for processing and discard all other fragments.
For draw calls rendering multiple points, lines, or triangles arranged in
lists, strips, or fans, the representative fragment test is performed
independently for each of those primitives.

View File

@@ -33,7 +33,7 @@ textures) or 64x32x32 (for 3D textures).
Each footprint query returns the footprint from a single texture level.
When using minification filters that combine accesses from multiple mipmap
levels, shaders must perform separate queries for the two levels accessed
("fine" and "coarse").
("`fine`" and "`coarse`").
The footprint query also returns a flag indicating if the texture lookup
would access texels from only one mipmap level or from two neighboring
levels.
@@ -49,7 +49,7 @@ over the geometry in the second image that performs a footprint query for
each visible pixel to determine the set of pixels that it needs from the
first image.
This pass would accumulate an aggregate footprint of all visible pixels into
a separate "footprint image" using shader atomics.
a separate "`footprint image`" using shader atomics.
Then, when rendering the first image, the application can kill all shading
work for pixels not in this aggregate footprint.
@@ -101,7 +101,7 @@ None.
*RESOLVED*: We expect that applications using this feature will want to use
a fixed granularity and accumulate coverage information from the returned
footprints into an aggregate "footprint image" that tracks the portions of
footprints into an aggregate "`footprint image`" that tracks the portions of
an image that would be needed by regular texture filtering.
If an application is using a two-dimensional image with 4x4 pixel
granularity, we expect that the footprint image will use 64-bit texels where
@@ -124,7 +124,7 @@ aligned regions and may require updates to four separate footprint image
texels.
In this case, the implementation will return an anchor coordinate pointing
at the lower right footprint image texel and an offset will identify how
many "columns" and "rows" of the returned 8x8 mask correspond to footprint
many "`columns`" and "`rows`" of the returned 8x8 mask correspond to footprint
texels to the left and above the anchor texel.
If the anchor is (2,3), the 64 bits of the returned mask are arranged
spatially as follows, where each 4x4 block is assigned a bit number that
@@ -201,13 +201,13 @@ when accumulating coverage include:
* When the returned footprint spans multiple texels in the footprint image,
each invocation needs to perform four atomic operations.
In the previous issue, we had an example that computed separate masks for
"topLeft", "topRight", "bottomLeft", and "bottomRight".
"`topLeft`", "`topRight`", "`bottomLeft`", and "`bottomRight`".
When the invocations in a subgroup have good locality, it might be the
case the "top left" for some invocations might refer to footprint image
texel (10,10), while neighbors might have their "top left" texels at
case the "`top left`" for some invocations might refer to footprint image
texel (10,10), while neighbors might have their "`top left`" texels at
(11,10), (10,11), and (11,11).
If you compute separate masks for even/odd x and y values instead of
left/right or top/bottom, the "odd/odd" mask for all invocations in the
left/right or top/bottom, the "`odd/odd`" mask for all invocations in the
subgroup hold coverage for footprint image texel (11,11), which can be
updated by a single atomic operation for the entire subgroup.
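The parity trick can be sketched with the texel coordinates from the example above; the 2x2 neighborhoods listed are illustrative assumptions about which footprint-image texels each invocation's mask spans, not values taken from the extension:

```python
# Footprint-image texels touched by four neighboring invocations, each
# listing the 2x2 block of texels its returned mask spans (illustrative).
touched = [
    [(10, 10), (11, 10), (10, 11), (11, 11)],
    [(11, 10), (12, 10), (11, 11), (12, 11)],
    [(10, 11), (11, 11), (10, 12), (11, 12)],
    [(11, 11), (12, 11), (11, 12), (12, 12)],
]

def parity_slots(texels):
    # Key each touched texel by (x parity, y parity) instead of its
    # left/right, top/bottom position relative to the invocation.
    return {(x % 2, y % 2): (x, y) for (x, y) in texels}

# The "odd/odd" slot names the SAME texel (11, 11) for every invocation,
# so the subgroup can combine those masks and issue one atomic for it.
assert all(parity_slots(t)[(1, 1)] == (11, 11) for t in touched)
```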

View File

@@ -113,7 +113,7 @@ None.
=== Issues
(1) When using shading rates that specify "coarse" fragments covering
(1) When using shading rates that specify "`coarse`" fragments covering
multiple pixels, we will generate a combined coverage mask that combines
the coverage masks of all pixels covered by the fragment.
By default, these masks are combined in an implementation-dependent
@@ -165,7 +165,7 @@ With multi-pixel fragments, we follow a similar pattern, using the
intersection of the primitive and the *set* of pixels corresponding to the
fragment.
-One important thing to keep in mind when using such "coarse" shading rates
+One important thing to keep in mind when using such "`coarse`" shading rates
is that fragment attributes are sampled at the center of the fragment by
default, regardless of the set of pixels/samples covered by the fragment.
For fragments with a size of 4x4 pixels, this center location will be more
@@ -175,7 +175,7 @@ When rendering a primitive that covers only a small part of a coarse
fragment, sampling a color outside the primitive can produce overly bright
or dark color values if the color values have a large gradient.
To deal with this, an application can use centroid sampling on attributes
where "extrapolation" artifacts can lead to overly bright or dark pixels.
where "`extrapolation`" artifacts can lead to overly bright or dark pixels.
Note that this same problem also exists for multisampling with single-pixel
fragments, but is less severe because it only affects certain samples of a
pixel and such bright/dark samples may be averaged with other samples that

View File

@@ -1396,9 +1396,9 @@ Units in the Last Place (ULP)::
A measure of floating-point error loosely defined as the smallest
representable step in a floating-point format near a given value.
For the precise definition see <<spirvenv-precision-operation, Precision
and Operation of SPIR-V instructions>> or Jean-Michel Muller, "On the
definition of ulp(x).", RR-5504, INRIA.
Other sources may also use the term "unit of least precision".
and Operation of SPIR-V instructions>> or Jean-Michel Muller, "`On the
definition of ulp(x)`", RR-5504, INRIA.
Other sources may also use the term "`unit of least precision`".
Unnormalized::
A value that is interpreted according to its conventional
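Python's math module exposes this quantity directly, which makes the loose definition concrete (illustrative, for IEEE 754 binary64 rather than any particular shader format):

```python
import math

# ULP near 1.0: the gap between 1.0 and the next representable double.
assert math.ulp(1.0) == 2.0 ** -52
# ULP scales with magnitude: at 2^53 the gap between doubles is 2.0.
assert math.ulp(2.0 ** 53) == 2.0
# nextafter steps by exactly one ULP.
x = 1.5
assert math.nextafter(x, math.inf) - x == math.ulp(x)
```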

View File

@@ -113,7 +113,7 @@ by a single shader invocation:
* (Complete definition): No other dynamic instances are program-ordered.
For instructions executed on the host, the source language defines the
program-order relation (e.g. as "sequenced-before").
program-order relation (e.g. as "`sequenced-before`").
[[memory-model-scope]]
== Scope
@@ -231,7 +231,7 @@ SPIR-V supports the following memory semantics:
A memory barrier with this semantic is both a release and acquire
barrier.
NOTE: SPIR-V does not support "consume" semantics on the device.
NOTE: SPIR-V does not support "`consume`" semantics on the device.
The memory semantics operand also includes _storage class semantics_ which
indicate which storage classes are constrained by the synchronization.
@@ -303,8 +303,8 @@ subsequence of A's scoped modification order that consists of:
NOTE: The atomics in the last bullet must: be mutually-ordered with A by
virtue of being in A's scoped modification order.
NOTE: This intentionally omits "atomic writes to M performed by the same
agent that performed A", which is present in the corresponding C++
NOTE: This intentionally omits "`atomic writes to M performed by the same
agent that performed A`", which is present in the corresponding C++
definition.
[[memory-model-synchronizes-with]]
@@ -463,8 +463,8 @@ visibility>> operations may: be required for writes to be
NOTE: Happens-before is not transitive, but each of program-order and
inter-thread-happens-before<SC> are transitive.
These can be thought of as covering the "single-threaded" case and the
"multi-threaded" case, and it's not necessary (and not valid) to form chains
These can be thought of as covering the "`single-threaded`" case and the
"`multi-threaded`" case, and it's not necessary (and not valid) to form chains
between the two.
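The release/acquire pairing that establishes happens-before across threads can be sketched with a host-side analogy; `threading.Event` here stands in for a release/acquire pair, as an illustrative analogy rather than the SPIR-V mechanism:

```python
import threading

data = None
flag = threading.Event()

def producer():
    global data
    data = 42   # program-ordered before the release-like operation...
    flag.set()  # ...which synchronizes-with the matching wait()

def consumer(out):
    flag.wait()        # acquire-like operation
    out.append(data)   # happens-after the write: guaranteed to see 42

out = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(out,))
t2.start(); t1.start()
t1.join(); t2.join()
assert out == [42]
```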
[[memory-model-availability-visibility]]
@@ -652,15 +652,15 @@ API commands.
NOTE: It is expected that all invocations in a subgroup execute on the same
processor with the same path to memory, and thus availability and visibility
operations with subgroup scope can be expected to be "free".
operations with subgroup scope can be expected to be "`free`".
[[memory-model-location-ordered]]
== Location-Ordered
Let X and Y be memory accesses to overlapping sets of memory locations M,
where X != Y. Let (A~X~,R~X~) be the agent and reference used for X, and
-(A~Y~,R~Y~) be the agent and reference used for Y. For now, let "->" denote
-happens-before and "->^rcpo^" denote the reflexive closure of
+(A~Y~,R~Y~) be the agent and reference used for Y. For now, let "`->`"
+denote happens-before and "`->^rcpo^`" denote the reflexive closure of
program-ordered before.
If D~1~ and D~2~ are different memory domains, then let DOM(D~1~,D~2~) be a
@@ -701,8 +701,8 @@ the following is true:
NOTE: The final bullet (synchronization through device/host domain) requires
API-level synchronization operations, since the device/host domains are not
accessible via shader instructions.
And "device domain" is not to be confused with "device scope", which
synchronizes through the "shader domain".
And "`device domain`" is not to be confused with "`device scope`", which
synchronizes through the "`shader domain`".
[[memory-model-access-data-race]]
== Data Race
@@ -816,7 +816,7 @@ the same memory location.
This is critical to allow it to reason about memory that is reused in
multiple ways, e.g. across the lifetime of different shader invocations or
draw calls.
While GLSL (and legacy SPIR-V) applies the "coherent" decoration to
While GLSL (and legacy SPIR-V) applies the "`coherent`" decoration to
variables (for historical reasons), this model treats each memory access
instruction as having optional implicit availability/visibility operations.
GLSL to SPIR-V compilers should map all (non-atomic) operations on a

View File

@@ -654,7 +654,7 @@ The precision of operations is defined either in terms of rounding, as an
error bound in ULP, or as inherited from a formula as follows.
.Correctly Rounded
Operations described as "correctly rounded" will return the infinitely
Operations described as "`correctly rounded`" will return the infinitely
precise result, [eq]#x#, rounded so as to be representable in
floating-point.
The rounding mode used is not defined but if [eq]#x# is exactly
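Correct rounding is easy to observe on the host with IEEE 754 doubles; this illustration uses binary64 rather than any shader-specific format:

```python
import math

# Each basic operation returns the infinitely precise result rounded to a
# representable double. 0.1 and 0.2 are already rounded on conversion, so
# their correctly rounded sum is not the double nearest to decimal 0.3:
a, b = 0.1, 0.2
assert a + b != 0.3
assert a + b == 0.30000000000000004
# ...but it stays within one ULP of that nearest double.
assert abs((a + b) - 0.3) <= math.ulp(0.3)
```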

View File

@@ -127,9 +127,9 @@ lines are the adjacent vertices that are accessible in a geometry shader.
[NOTE]
.Note
====
-The terminology {ldquo}vertex [eq]#i# {rdquo} means {ldquo}the vertex with
+The terminology "`vertex [eq]#i#`" means "`the vertex with
index [eq]#i# in the ordered list of vertices defining this
-primitive{rdquo}.
+primitive`".
====
[NOTE]

View File

@@ -1123,7 +1123,7 @@ of rasterization and fragment processing work performed for each point,
line, or triangle primitive.
For any primitive that produces one or more fragments that pass all prior
early fragment tests, the implementation may: choose one or more
"representative" fragments for processing and discard all other fragments.
"`representative`" fragments for processing and discard all other fragments.
For draw calls rendering multiple points, lines, or triangles arranged in
lists, strips, or fans, the representative fragment test is performed
independently for each of those primitives.

View File

@@ -1211,9 +1211,9 @@ Occasionally, further requirements will be specified.
Most single-precision floating-point formats meet these requirements.
The special values [eq]#Inf# and [eq]#-Inf# encode values with magnitudes
-too large to be represented; the special value [eq]#NaN# encodes {ldquo}Not
-A Number{rdquo} values resulting from undefined: arithmetic operations such
-as [eq]#0 / 0#.
+too large to be represented; the special value [eq]#NaN# encodes "`Not A
+Number`" values resulting from undefined: arithmetic operations such as
+[eq]#0 / 0#.
Implementations may: support [eq]#Inf# and [eq]#NaN# in their floating-point
computations.

View File

@@ -337,8 +337,8 @@ determine the version of Vulkan.
Implicit layers must: be disabled if they do not support a version at least
as high as pname:apiVersion.
See the <<LoaderAndLayerInterface, "Vulkan Loader Specification and
Architecture Overview">> document for additional information.
See the <<LoaderAndLayerInterface, Vulkan Loader Specification and
Architecture Overview>> document for additional information.
[NOTE]
.Note

View File

@@ -819,7 +819,7 @@ ifdef::VK_VERSION_1_1[]
[[shaders-subgroup]]
== Subgroups
-A _subgroup_ (see the subsection ``Control Flow'' of section 2 of the SPIR-V
+A _subgroup_ (see the subsection "`Control Flow`" of section 2 of the SPIR-V
1.3 Revision 1 specification) is a set of invocations that can synchronize
and share data with each other efficiently.
An invocation group is partitioned into one or more subgroups.