// Copyright (c) 2015-2016 The Khronos Group Inc.
// Copyright notice at https://www.khronos.org/registry/speccopyright.html
[[synchronization]]
= Synchronization and Cache Control

Synchronization of access to resources is primarily the responsibility of
the application. In Vulkan, there are four forms of concurrency during
execution: between the host and device, between the queues, between queue
submissions, and between commands within a command buffer. Vulkan provides
the application with a set of synchronization primitives for these purposes.
Further, memory caches and other optimizations mean that the normal flow of
command execution does not guarantee that all memory transactions from a
command are immediately visible to other agents with views into a given
range of memory. Vulkan also provides barrier operations to ensure this type
of synchronization.

Four synchronization primitive types are exposed by Vulkan. These are:

* <<synchronization-fences,Fences>>
* <<synchronization-semaphores,Semaphores>>
* <<synchronization-events,Events>>
* <<synchronization-pipeline-barriers,Barriers>>

Each is covered in detail in its own subsection of this chapter. Fences are
used to communicate completion of execution of command buffer submissions to
queues back to the application. Fences can: therefore be used as a
coarse-grained synchronization mechanism. Semaphores are generally
associated with resources or groups of resources and can: be used to marshal
ownership of shared data. Their status is not visible to the host. Events
provide a finer-grained synchronization primitive which can: be signaled at
command level granularity by both device and host, and can: be waited upon
by either. Barriers provide execution and memory synchronization between
sets of commands.
[[synchronization-fences]]
== Fences
// refBegin VkFence - Opaque handle to a fence object
Fences can: be used by the host to determine completion of execution of
_queue operations_.
A fence's status is always either _signaled_ or _unsignaled_. The host can:
poll the status of a single fence, or wait for any or all of a group of
fences to become signaled.
Fences are represented by sname:VkFence handles:
include::../api/handles/VkFence.txt[]
// refEnd VkFence
// refBegin vkCreateFence Create a new fence object.
To create a new fence object, use the command
include::../api/protos/vkCreateFence.txt[]
* pname:device is the logical device that creates the fence.
* pname:pCreateInfo points to a slink:VkFenceCreateInfo structure
specifying the state of the fence object.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
* pname:pFence points to a handle in which the resulting fence object is
returned.
include::../validity/protos/vkCreateFence.txt[]
// refBegin VkFenceCreateInfo - Structure specifying parameters of a newly created fence
The sname:VkFenceCreateInfo structure is defined as:
include::../api/structs/VkFenceCreateInfo.txt[]
* pname:flags defines the initial state and behavior of the fence. Bits
which can: be set include:
+
--
// refBegin VkFenceCreateFlagBits - Bitmask specifying initial state and behavior of a fence
include::../api/enums/VkFenceCreateFlagBits.txt[]
--
+
If pname:flags contains ename:VK_FENCE_CREATE_SIGNALED_BIT then the fence
object is created in the signaled state. Otherwise it is created in the
unsignaled state.
include::../validity/structs/VkFenceCreateInfo.txt[]
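
As an informal, non-normative illustration, the following sketch creates a
fence that begins in the signaled state, assuming `device` is a valid
sname:VkDevice handle. Starting signaled is convenient when the first
iteration of a loop waits on the fence before any work has been submitted
with it:

[source,c]
---------------------------------------------------
// Non-normative sketch: create a fence in the signaled state.
// Assumes `device` is a valid VkDevice.
VkFenceCreateInfo fenceInfo = {
    .sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO,
    .pNext = NULL,
    .flags = VK_FENCE_CREATE_SIGNALED_BIT,
};

VkFence fence;
VkResult result = vkCreateFence(device, &fenceInfo, NULL, &fence);
// On VK_SUCCESS, `fence` can be passed to a queue submission, or waited on
// immediately because it starts signaled.
---------------------------------------------------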
// refBegin vkDestroyFence Destroy a fence object
To destroy a fence, call:
include::../api/protos/vkDestroyFence.txt[]
* pname:device is the logical device that destroys the fence.
* pname:fence is the handle of the fence to destroy.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
include::../validity/protos/vkDestroyFence.txt[]
// refBegin vkGetFenceStatus Return the status of a fence.
To query the status of a fence from the host, use the command
include::../api/protos/vkGetFenceStatus.txt[]
* pname:device is the logical device that owns the fence.
* pname:fence is the handle of the fence to query.
Upon success, fname:vkGetFenceStatus returns the status of the fence,
which is one of:
* ename:VK_SUCCESS indicates that the fence is signaled.
* ename:VK_NOT_READY indicates that the fence is unsignaled.
include::../validity/protos/vkGetFenceStatus.txt[]
// refBegin vkResetFences Resets one or more fence objects.
To reset the status of one or more fences to the unsignaled state, use the
command:
include::../api/protos/vkResetFences.txt[]
* pname:device is the logical device that owns the fences.
* pname:fenceCount is the number of fences to reset.
* pname:pFences is a pointer to an array of pname:fenceCount fence
handles to reset.
If a fence is already in the unsignaled state, then resetting it has no
effect.
include::../validity/protos/vkResetFences.txt[]
[[synchronization-fences-signaling]]
Fences can: be signaled by including them in a
<<devsandqueues-submission, queue submission>> command, defining a
queue operation to signal that fence.
This _fence signal operation_ defines the first half of a memory dependency,
guaranteeing that all memory accesses defined by the queue submission are
made available, and that queue operations described by that submission have
completed execution.
This half of the memory dependency does not include host availability of
memory accesses.
The second half of the dependency can: be defined by flink:vkWaitForFences.
Fence signal operations for flink:vkQueueSubmit additionally include all
queue operations previously submitted via flink:vkQueueSubmit in their half
of a memory dependency.
// refBegin vkWaitForFences Wait for one or more fences to become signaled.
To cause the host to wait until any one or all of a group of fences
is signaled, use the command:
include::../api/protos/vkWaitForFences.txt[]
* pname:device is the logical device that owns the fences.
* pname:fenceCount is the number of fences to wait on.
* pname:pFences is a pointer to an array of pname:fenceCount fence
handles.
* pname:waitAll is the condition that must: be satisfied to successfully
unblock the wait. If pname:waitAll is ename:VK_TRUE, then the condition
is that all fences in pname:pFences are signaled. Otherwise, the
condition is that at least one fence in pname:pFences is signaled.
* pname:timeout is the timeout period in units of nanoseconds.
pname:timeout is adjusted to the closest value allowed by the
implementation-dependent timeout accuracy, which may: be substantially
longer than one nanosecond, and may: be longer than the requested
period.
If the condition is satisfied when fname:vkWaitForFences is called, then
fname:vkWaitForFences returns immediately. If the condition is not satisfied
at the time fname:vkWaitForFences is called, then fname:vkWaitForFences will
block and wait up to pname:timeout nanoseconds for the condition to become
satisfied.
If pname:timeout is zero, then fname:vkWaitForFences does not
wait, but simply returns the current state of the fences. ename:VK_TIMEOUT
will be returned in this case if the condition is not satisfied, even though
no actual wait was performed.
If the specified timeout period expires before the condition is satisfied,
fname:vkWaitForFences returns ename:VK_TIMEOUT. If the condition is
satisfied before pname:timeout nanoseconds has expired,
fname:vkWaitForFences returns ename:VK_SUCCESS.
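
As a non-normative illustration of the behavior described above, the
following sketch waits on a single fence with a finite timeout and resets it
once it has become signaled, assuming `device` and `fence` are valid handles
and the fence was included in an earlier queue submission:

[source,c]
---------------------------------------------------
// Non-normative sketch: wait for one fence with a one-second timeout.
const uint64_t oneSecondInNanoseconds = 1000000000ull;

VkResult result = vkWaitForFences(device, 1, &fence, VK_TRUE,
                                  oneSecondInNanoseconds);
if (result == VK_SUCCESS) {
    // The fence is signaled: the submission has completed execution.
    vkResetFences(device, 1, &fence);   // return it to the unsignaled state
} else if (result == VK_TIMEOUT) {
    // The timeout expired before the fence became signaled; the fence is
    // still associated with pending work, so it is not reset here.
}
---------------------------------------------------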
[[synchronization-fences-devicewrites]]
fname:vkWaitForFences defines the second half of a memory dependency with
the host, for each fence being waited on. The memory dependency defined by
signaling a fence and waiting on the host does not guarantee that the
results of memory accesses will be visible to the host, or that the memory
is available. To provide that guarantee, the application must: insert a
memory barrier between the device writes and the end of the submission
that will signal the fence, with pname:dstAccessMask having the
ename:VK_ACCESS_HOST_READ_BIT bit set, with pname:dstStageMask having the
ename:VK_PIPELINE_STAGE_HOST_BIT bit set, and with the appropriate
pname:srcStageMask and pname:srcAccessMask members set to guarantee
completion of the writes. If the memory was allocated without the
ename:VK_MEMORY_PROPERTY_HOST_COHERENT_BIT set, then
fname:vkInvalidateMappedMemoryRanges must: be called after the fence is
signaled in order to ensure the writes are visible to the host, as described
in <<memory-device-hostaccess,Host Access to Device Memory Objects>>.
include::../validity/protos/vkWaitForFences.txt[]
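
A non-normative sketch of the pattern described above follows. It assumes
the device writes were produced by transfer commands recorded in
`commandBuffer`, that `memory` is the mapped allocation backing the written
resource and was allocated without ename:VK_MEMORY_PROPERTY_HOST_COHERENT_BIT,
and that `fence` is signaled by the submission containing `commandBuffer`:

[source,c]
---------------------------------------------------
// Non-normative sketch: make device writes visible to host reads.
// Recorded at the end of the command buffer whose submission signals `fence`.
VkMemoryBarrier hostReadBarrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_HOST_READ_BIT,
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,   // srcStageMask
                     VK_PIPELINE_STAGE_HOST_BIT,       // dstStageMask
                     0,                                // dependencyFlags
                     1, &hostReadBarrier,
                     0, NULL,
                     0, NULL);

// Later, on the host, after submitting the command buffer with `fence`:
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);

// Needed because the memory was allocated without
// VK_MEMORY_PROPERTY_HOST_COHERENT_BIT.
VkMappedMemoryRange mappedRange = {
    .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
    .pNext = NULL,
    .memory = memory,
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};
vkInvalidateMappedMemoryRanges(device, 1, &mappedRange);
---------------------------------------------------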
[[synchronization-semaphores]]
== Semaphores
// refBegin VkSemaphore - Opaque handle to a semaphore object
Semaphores are used to coordinate queue operations both within a queue and
between different queues. A semaphore's status is always either _signaled_
or _unsignaled_.
Semaphores are represented by sname:VkSemaphore handles:
include::../api/handles/VkSemaphore.txt[]
// refEnd VkSemaphore
// refBegin vkCreateSemaphore Create a new queue semaphore object.
To create a new semaphore object, use the command
include::../api/protos/vkCreateSemaphore.txt[]
* pname:device is the logical device that creates the semaphore.
* pname:pCreateInfo points to a slink:VkSemaphoreCreateInfo structure
specifying the state of the semaphore object.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
* pname:pSemaphore points to a handle in which the resulting
semaphore object is returned. The semaphore is created in the unsignaled
state.
include::../validity/protos/vkCreateSemaphore.txt[]
// refBegin VkSemaphoreCreateInfo - Structure specifying parameters of a newly created semaphore
The sname:VkSemaphoreCreateInfo structure is defined as:
include::../api/structs/VkSemaphoreCreateInfo.txt[]
* pname:sType is the type of this structure.
* pname:pNext is `NULL` or a pointer to an extension-specific structure.
* pname:flags is reserved for future use.
include::../validity/structs/VkSemaphoreCreateInfo.txt[]
// refBegin vkDestroySemaphore Destroy a semaphore object
To destroy a semaphore, call:
include::../api/protos/vkDestroySemaphore.txt[]
* pname:device is the logical device that destroys the semaphore.
* pname:semaphore is the handle of the semaphore to destroy.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
include::../validity/protos/vkDestroySemaphore.txt[]
[[synchronization-semaphores-signaling]]
Semaphores can: be signaled by including them in a batch as part of a
<<devsandqueues-submission, queue submission>> command, defining a
queue operation to signal that semaphore.
This _semaphore signal operation_ defines the first half of a memory
dependency, guaranteeing that all memory accesses defined by the
submitted queue operations in the batch are made available, and that those
queue operations have completed execution.
Semaphore signal operations for flink:vkQueueSubmit additionally include all
queue operations previously submitted via flink:vkQueueSubmit in their half
of a memory dependency, and all batches that are stored at a lower index in
the same pname:pSubmits array.
[[synchronization-semaphores-waiting]]
Signaling of semaphores can: be waited on by similarly including them
in a batch, defining a queue operation to wait for a signal. A semaphore
wait operation defines the second half of a memory dependency for the
semaphores being waited on. This half of the memory dependency guarantees
that the first half has completed execution, and also guarantees that all
available memory accesses are made visible to the queue operations in the
batch.
Semaphore wait operations for flink:vkQueueSubmit additionally include all
queue operations subsequently submitted via flink:vkQueueSubmit in their
half of a memory dependency, and all batches that are stored at a higher
index in the same pname:pSubmits array.
When queue execution reaches a semaphore wait operation, the queue will
stall execution of queue operations in the batch until each semaphore
becomes signaled. Once all semaphores are signaled, the semaphores will be
reset to the unsignaled state, and subsequent queue operations will be
permitted to execute.
Semaphore wait operations defined by flink:vkQueueSubmit only wait at
specific pipeline stages, rather than delaying all of each command buffer's
execution, with the pipeline stages determined by the corresponding element
of the pname:pWaitDstStageMask member of sname:VkSubmitInfo. Execution of
work by those stages in subsequent commands is stalled until the
corresponding semaphore reaches the signaled state.
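
As a non-normative illustration, the following sketch submits a batch that
waits on a semaphore only at the color attachment output stage and signals a
second semaphore when the batch completes. The semaphore, command buffer,
queue and fence names are hypothetical and assumed to be valid handles:

[source,c]
---------------------------------------------------
// Non-normative sketch: wait on `imageAcquiredSemaphore` only at the color
// attachment output stage, allowing earlier pipeline stages to run without
// waiting, and signal `renderCompleteSemaphore` when the batch completes.
VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

VkSubmitInfo submitInfo = {
    .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
    .pNext = NULL,
    .waitSemaphoreCount = 1,
    .pWaitSemaphores = &imageAcquiredSemaphore,
    .pWaitDstStageMask = &waitStage,
    .commandBufferCount = 1,
    .pCommandBuffers = &commandBuffer,
    .signalSemaphoreCount = 1,
    .pSignalSemaphores = &renderCompleteSemaphore,
};

vkQueueSubmit(queue, 1, &submitInfo, fence);
---------------------------------------------------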
[NOTE]
.Note
====
A common scenario for using pname:pWaitDstStageMask with values other than
ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT is when synchronizing a window
system presentation operation against subsequent command buffers which
render the next frame. In this case, a presentation image mustnot: be
overwritten until the presentation operation completes, but other pipeline
stages can: execute without waiting. A mask of
ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT prevents subsequent
color attachment writes from executing until the semaphore signals.
Some implementations may: be able to execute transfer operations and/or
vertex processing work before the semaphore is signaled.
If an image layout transition needs to be performed on a swapchain image
before it is used in a framebuffer, that can: be performed as the first
operation submitted to the queue after acquiring the image,
and shouldnot: prevent other work from overlapping with the presentation
operation. For example, a sname:VkImageMemoryBarrier could use:
* pname:srcStageMask = ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
* pname:srcAccessMask = ename:VK_ACCESS_MEMORY_READ_BIT
* pname:dstStageMask = ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
* pname:dstAccessMask = ename:VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
ename:VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
* pname:oldLayout = etext:VK_IMAGE_LAYOUT_PRESENT_SRC_KHR
* pname:newLayout = ename:VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL
Alternatively, pname:oldLayout can: be ename:VK_IMAGE_LAYOUT_UNDEFINED, if
the image's contents need not be preserved.
This barrier accomplishes a dependency chain between previous presentation
operations and subsequent color attachment output operations, with the
layout transition performed in between, and does not introduce a dependency
between previous work and any vertex processing stages. More precisely, the
semaphore signals after the presentation operation completes, then the
semaphore wait stalls the
ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT stage, then there is a
dependency from that same stage to itself with the layout transition
performed in between.
(The primary use case for this example is with the presentation extensions,
thus the etext:VK_IMAGE_LAYOUT_PRESENT_SRC_KHR token is used even though it
is not defined in the core Vulkan specification.)
====
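
Expressed as code, the barrier described in the preceding note might look
like the following non-normative sketch, assuming `swapchainImage` is the
acquired presentable image and that the presentation extensions defining
etext:VK_IMAGE_LAYOUT_PRESENT_SRC_KHR are enabled:

[source,c]
---------------------------------------------------
// Non-normative sketch of the barrier described in the preceding note.
VkImageMemoryBarrier presentToColorBarrier = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_MEMORY_READ_BIT,
    .dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_READ_BIT |
                     VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR,
    .newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = swapchainImage,
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel = 0,
        .levelCount = 1,
        .baseArrayLayer = 0,
        .layerCount = 1,
    },
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                     VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                     0,
                     0, NULL,
                     0, NULL,
                     1, &presentToColorBarrier);
---------------------------------------------------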
[[synchronization-events]]
== Events
// refBegin VkEvent - Opaque handle to an event object
Events represent a fine-grained synchronization primitive that can: be used
to gauge progress through a sequence of commands executed on a queue by
Vulkan. An event is initially in the unsignaled state. It can: be
signaled by a device, using commands inserted into the command buffer, or by
the host. It can: also be reset to the unsignaled state by a device or the
host. The host can: query the state of an event. A device can: wait for one
or more events to become signaled.
Events are represented by sname:VkEvent handles:
include::../api/handles/VkEvent.txt[]
// refEnd VkEvent
// refBegin vkCreateEvent Create a new event object.
To create an event, call:
include::../api/protos/vkCreateEvent.txt[]
* pname:device is the logical device that creates the event.
* pname:pCreateInfo is a pointer to an instance of the
sname:VkEventCreateInfo structure which contains information about how
the event is to be created.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
* pname:pEvent points to a handle in which the resulting event object is
returned.
When created, the event object is in the unsignaled state.
include::../validity/protos/vkCreateEvent.txt[]
// refBegin VkEventCreateInfo - Structure specifying parameters of a newly created event
The sname:VkEventCreateInfo structure is defined as:
include::../api/structs/VkEventCreateInfo.txt[]
* pname:flags is reserved for future use.
include::../validity/structs/VkEventCreateInfo.txt[]
// refBegin vkDestroyEvent Destroy an event object
To destroy an event, call:
include::../api/protos/vkDestroyEvent.txt[]
* pname:device is the logical device that destroys the event.
* pname:event is the handle of the event to destroy.
* pname:pAllocator controls host memory allocation as described in the
<<memory-allocation, Memory Allocation>> chapter.
include::../validity/protos/vkDestroyEvent.txt[]
// refBegin vkGetEventStatus Retrieve the status of an event object.
To query the state of an event from the host, call:
include::../api/protos/vkGetEventStatus.txt[]
* pname:device is the logical device that owns the event.
* pname:event is the handle of the event to query.
Upon success, fname:vkGetEventStatus returns the state of the event object
with the following return codes:
[width="80%",options="header"]
|=====
| Status | Meaning
| ename:VK_EVENT_SET | The event specified by pname:event is signaled.
| ename:VK_EVENT_RESET | The event specified by pname:event is unsignaled.
|=====
The state of an event can: be updated by the host. The state of the event is
immediately changed, and subsequent calls to fname:vkGetEventStatus will
return the new state. If an event is already in the requested state, then
updating it to the same state has no effect.
include::../validity/protos/vkGetEventStatus.txt[]
// refBegin vkSetEvent Set an event to signaled state.
To set the state of an event to signaled from the host, call:
include::../api/protos/vkSetEvent.txt[]
* pname:device is the logical device that owns the event.
* pname:event is the event to set.
include::../validity/protos/vkSetEvent.txt[]
// refBegin vkResetEvent Reset an event to non-signaled state.
To set the state of an event to unsignaled from the host, call:
include::../api/protos/vkResetEvent.txt[]
* pname:device is the logical device that owns the event.
* pname:event is the event to reset.
include::../validity/protos/vkResetEvent.txt[]
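
A brief non-normative sketch of driving an event from the host, assuming
`device` and `event` are valid handles:

[source,c]
---------------------------------------------------
// Non-normative sketch: host-side event manipulation.
vkSetEvent(device, event);                          // event becomes signaled
VkResult status = vkGetEventStatus(device, event);  // returns VK_EVENT_SET
vkResetEvent(device, event);                        // event becomes unsignaled
status = vkGetEventStatus(device, event);           // returns VK_EVENT_RESET
---------------------------------------------------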
// refBegin vkCmdSetEvent Set an event object to signaled state.
The state of an event can: also be updated on the device by commands
inserted in command buffers. To set the state of an event to signaled from
a device, call:
include::../api/protos/vkCmdSetEvent.txt[]
* pname:commandBuffer is the command buffer into which the command is
recorded.
* pname:event is the event that will be signaled.
* pname:stageMask specifies the pipeline stage at which the state of
pname:event is updated as described below.
include::../validity/protos/vkCmdSetEvent.txt[]
// refBegin vkCmdResetEvent Reset an event object to non-signaled state.
To set the state of an event to unsignaled from a device, call:
include::../api/protos/vkCmdResetEvent.txt[]
* pname:commandBuffer is the command buffer into which the command is
recorded.
* pname:event is the event that will be reset.
* pname:stageMask specifies the pipeline stage at which the state of
pname:event is updated as described below.
include::../validity/protos/vkCmdResetEvent.txt[]
For both fname:vkCmdSetEvent and fname:vkCmdResetEvent, the status of
pname:event is updated once the pipeline stages specified by pname:stageMask
(see <<synchronization-pipeline-stage-flags>>) have completed executing
prior commands. The command modifying the event is passed through the
pipeline bound to the command buffer at time of execution.
// refBegin vkCmdWaitEvents Wait for one or more events and insert a set of memory barriers
To wait for one or more events to enter the signaled state on a device,
call:
include::../api/protos/vkCmdWaitEvents.txt[]
* pname:commandBuffer is the command buffer into which the command is
recorded.
* pname:eventCount is the length of the pname:pEvents array.
* pname:pEvents is an array of event object handles to wait on.
* pname:srcStageMask (see <<synchronization-pipeline-stage-flags>>) is the
bitwise OR of the pipeline stages used to signal the event object
handles in pname:pEvents.
* pname:dstStageMask is the pipeline stages at which the wait will occur.
* pname:pMemoryBarriers is a pointer to an array of
pname:memoryBarrierCount sname:VkMemoryBarrier structures.
* pname:pBufferMemoryBarriers is a pointer to an array of
pname:bufferMemoryBarrierCount sname:VkBufferMemoryBarrier structures.
* pname:pImageMemoryBarriers is a pointer to an array of
pname:imageMemoryBarrierCount sname:VkImageMemoryBarrier structures. See
<<synchronization-memory-barriers>> for more details about memory
barriers.
fname:vkCmdWaitEvents waits for events set by either fname:vkSetEvent or
fname:vkCmdSetEvent to become signaled. Logically, it has three phases:
. Wait at the pipeline stages specified by pname:dstStageMask (see
<<synchronization-pipeline-stage-flags>>) until the pname:eventCount
event objects specified by pname:pEvents become signaled.
Implementations may: wait for each event object to become signaled
in sequence (starting with the first event object in pname:pEvents,
and ending with the last), or wait for all of the event objects to
become signaled at the same time.
. Execute the memory barriers specified by pname:pMemoryBarriers,
pname:pBufferMemoryBarriers and pname:pImageMemoryBarriers (see
<<synchronization-memory-barriers>>).
. Resume execution of pipeline stages specified by pname:dstStageMask.
Implementations may: not execute commands in a pipelined manner, so
fname:vkCmdWaitEvents may: not observe the results of a subsequent
fname:vkCmdSetEvent or fname:vkCmdResetEvent command, even if the stages in
pname:dstStageMask occur after the stages in pname:srcStageMask.
Commands that update the state of events in different pipeline stages
may: execute out of order, unless the ordering is enforced by execution
dependencies.
[NOTE]
.Note
====
Applications should: be careful to avoid race conditions when using
events. For example, an event should: only be reset if no
fname:vkCmdWaitEvents command is executing that waits upon that event.
====
include::../validity/protos/vkCmdWaitEvents.txt[]
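
The following non-normative sketch signals an event once a copy has executed
and waits on it later in the same command buffer, with a memory barrier
making the transfer write visible to fragment shader reads. The handles and
the copy region are assumed to be valid:

[source,c]
---------------------------------------------------
// Non-normative sketch: vkCmdSetEvent / vkCmdWaitEvents in one command buffer.
vkCmdCopyBuffer(commandBuffer, srcBuffer, dstBuffer, 1, &copyRegion);
vkCmdSetEvent(commandBuffer, event, VK_PIPELINE_STAGE_TRANSFER_BIT);

// ... unrelated work can be recorded here ...

VkMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};

vkCmdWaitEvents(commandBuffer,
                1, &event,
                VK_PIPELINE_STAGE_TRANSFER_BIT,        // srcStageMask
                VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, // dstStageMask
                1, &barrier,
                0, NULL,
                0, NULL);
---------------------------------------------------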
An act of setting or resetting an event in one queue may: not affect or be
visible to other queues. For cross-queue synchronization, semaphores can: be
used.
[[synchronization-execution-and-memory-dependencies]]
== Execution And Memory Dependencies
Synchronization commands introduce explicit execution and memory
dependencies between two sets of action commands, where the second set of
commands depends on the first set of commands. The two sets can: be:
* First set: commands before a flink:vkCmdSetEvent command.
+
Second set: commands after a flink:vkCmdWaitEvents command in the same
queue, using the same event.
* First set: commands in a lower numbered subpass (or before a render pass
instance).
+
Second set: commands in a higher numbered subpass (or after a render pass
instance), where there is a <<renderpass,subpass dependency>> between the
two subpasses (or between a subpass and ename:VK_SUBPASS_EXTERNAL).
* First set: commands before a
<<synchronization-pipeline-barriers,pipeline barrier>>.
+
Second set: commands after that pipeline barrier in the same queue (possibly
limited to within the same subpass).
An _execution dependency_ is a single dependency between a set of source and
destination pipeline stages, which guarantees that all work performed by the
set of pipeline stages included in pname:srcStageMask (see
<<synchronization-pipeline-stage-flags,Pipeline Stage Flags>>) of the first
set of commands completes before any work performed by the set of pipeline
stages included in pname:dstStageMask of the second set of commands begins.
An _execution dependency chain_ from a set of source pipeline stages
latexmath:[$A$] to a set of destination pipeline stages latexmath:[$B$] is a
sequence of execution dependencies submitted to a queue in order between a
first set of commands and a second set of commands, satisfying the following
conditions:
* the first dependency includes latexmath:[$A$] or
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT or
ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT in the pname:srcStageMask. And,
* the final dependency includes latexmath:[$B$] or
ename:VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT or
ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT in the pname:dstStageMask. And,
* for each dependency in the sequence (except the first) at least one of
the following conditions is true:
** pname:srcStageMask of the current dependency includes at least one bit
latexmath:[$C$] that is present in the pname:dstStageMask of the
previous dependency. Or,
** pname:srcStageMask of the current dependency includes
ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT or
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT. Or,
** pname:dstStageMask of the previous dependency includes
ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT or
ename:VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT. Or,
** pname:srcStageMask of the current dependency includes
ename:VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT, and pname:dstStageMask of the
previous dependency includes at least one graphics pipeline stage. Or,
** pname:dstStageMask of the previous dependency includes
ename:VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT, and pname:srcStageMask of the
current dependency includes at least one graphics pipeline stage.
* for each dependency in the sequence (except the first), at least one of the
following conditions is true:
** the current dependency is a fname:vkCmdSetEvent/fname:vkCmdWaitEvents pair
(where the fname:vkCmdWaitEvents may: be inside or outside a render pass
instance), or a fname:vkCmdPipelineBarrier outside of a render pass instance,
or a subpass dependency with pname:srcSubpass equal to
ename:VK_SUBPASS_EXTERNAL for a render pass instance that begins with a
fname:vkCmdBeginRenderPass command, and the previous dependency is any of:
*** a fname:vkCmdSetEvent/fname:vkCmdWaitEvents pair or a
fname:vkCmdPipelineBarrier, either one outside of a render pass instance,
that precedes the current dependency in the queue execution order. Or,
*** a subpass dependency, with pname:dstSubpass equal to
ename:VK_SUBPASS_EXTERNAL, for a renderpass instance that was ended with a
fname:vkCmdEndRenderPass command that precedes the current dependency in
the queue execution order.
** the current dependency is a subpass dependency for a render pass instance,
and the previous dependency is any of:
*** another dependency for the same render pass instance, with a
pname:dstSubpass equal to the pname:srcSubpass of the current dependency.
Or,
*** a fname:vkCmdPipelineBarrier of the same render pass instance, recorded
for the subpass indicated by the pname:srcSubpass of the current
dependency. Or,
*** a fname:vkCmdSetEvent/fname:vkCmdWaitEvents pair, where the
fname:vkCmdWaitEvents is inside the same render pass instance, recorded
for the subpass indicated by the pname:srcSubpass of the current
dependency.
** the current dependency is a fname:vkCmdPipelineBarrier inside a subpass of
a render pass instance, and the previous dependency is any of:
*** a subpass dependency for the same render pass instance, with a
pname:dstSubpass equal to the subpass of the fname:vkCmdPipelineBarrier.
Or,
*** a fname:vkCmdPipelineBarrier of the same render pass instance, recorded
for the same subpass, before the current dependency. Or,
*** a fname:vkCmdSetEvent/fname:vkCmdWaitEvents pair, where the
fname:vkCmdWaitEvents is inside the same render pass instance, recorded
for the same subpass, before the current dependency.
A pair of consecutive execution dependencies in an execution dependency
chain accomplishes a dependency between the stages latexmath:[$A$] and
latexmath:[$B$] via intermediate stages latexmath:[$C$], even if no work is
executed between them that uses the pipeline stages included in
latexmath:[$C$].
An execution dependency chain guarantees that the work performed by the
pipeline stages latexmath:[$A$] in the first set of commands completes
before the work performed by pipeline stages latexmath:[$B$] in the second
set of commands begins.
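
As a non-normative illustration of an execution dependency chain, the two
barriers below chain the compute shader stage (playing the role of
latexmath:[$A$]) to the fragment shader stage (latexmath:[$B$]) through the
transfer stage (latexmath:[$C$]), even though no transfer work is recorded
between them. The commands are assumed to be recorded outside a render pass
instance in a single command buffer:

[source,c]
---------------------------------------------------
// Non-normative sketch: a two-barrier execution dependency chain.
vkCmdDispatch(commandBuffer, 64, 1, 1);

// First dependency: A (compute shader) -> C (transfer).
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     0, 0, NULL, 0, NULL, 0, NULL);

// Second dependency: C (transfer) -> B (fragment shader). Its srcStageMask
// includes a bit present in the previous dstStageMask, so the two barriers
// form a chain guaranteeing the dispatch completes before later fragment
// shader work, even though no transfer commands run between them.
vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0, 0, NULL, 0, NULL, 0, NULL);
---------------------------------------------------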
A command latexmath:[$C_1$] is said to _happen-before_ an execution dependency
latexmath:[$D_2$] for a pipeline stage latexmath:[$S$] if all the following
conditions are true:
* latexmath:[$C_1$] is in the first set of commands for an execution dependency
latexmath:[$D_1$] that includes latexmath:[$S$] in its pname:srcStageMask.
And,
* there exists an execution dependency chain that includes latexmath:[$D_1$]
and latexmath:[$D_2$], where latexmath:[$D_2$] follows latexmath:[$D_1$] in the
execution dependency sequence.
Similarly, a command latexmath:[$C_2$] is said to _happen-after_ an execution
dependency latexmath:[$D_1$] for a pipeline stage latexmath:[$S$] if all the
following conditions are true:
* latexmath:[$C_2$] is in the second set of commands for an execution dependency
latexmath:[$D_2$] that includes latexmath:[$S$] in its pname:dstStageMask.
And,
* there exists an execution dependency chain that includes latexmath:[$D_1$]
and latexmath:[$D_2$], where latexmath:[$D_2$] follows latexmath:[$D_1$] in the
execution dependency sequence.
An execution dependency is _by-region_ if its pname:dependencyFlags
parameter includes ename:VK_DEPENDENCY_BY_REGION_BIT. Such a barrier
describes a per-region (x,y,layer) dependency. That is, for each region, the
implementation must: ensure that the source stages for the first set of
commands complete execution before any destination stages begin execution in
the second set of commands for the same region. Since fragment shader
invocations are not specified to run in any particular groupings, the size
of a region is implementation-dependent, not known to the application, and
must: be assumed to be no larger than a single pixel. If
pname:dependencyFlags does not include ename:VK_DEPENDENCY_BY_REGION_BIT, it
describes a global dependency; that is, for all pixel regions, the source
stages must: have completed for preceding commands before any destination
stages start for subsequent commands.
[[synchronization-execution-and-memory-dependencies-available-and-visible]]
_Memory dependencies_ are coupled to execution dependencies, and synchronize
accesses to memory between two sets of commands. They operate according to two
``halves'' of a dependency to synchronize two sets of commands, the commands
that happen-before the execution dependency for the pname:srcStageMask vs the
commands that happen-after the execution dependency for the pname:dstStageMask,
as described above. The first half of the dependency makes memory accesses using
the set of access types in pname:srcAccessMask performed in pipeline stages in
pname:srcStageMask by the first set of commands complete and writes be
_available_ for subsequent commands. The second half of the dependency makes any
available writes from previous commands _visible_ to pipeline stages in
pname:dstStageMask using the set of access types in pname:dstAccessMask for the
second set of commands, if those writes have been made available with the first
half of the same or a previous dependency. The two halves of a memory dependency
can: either be expressed as part of a single command, or can: be part of
separate barriers as long as there is an execution dependency chain between
them. The application must: use memory dependencies to make writes visible
before subsequent reads can: rely on them, and before subsequent writes can:
overwrite them. Failure to do so causes the result of the reads to be
undefined, and the order of writes to be undefined.
[[synchronization-execution-and-memory-dependencies-types]]
<<synchronization-global-memory-barrier,Global memory barriers>> apply to
all resources owned by the device.
<<synchronization-buffer-memory-barrier,Buffer>> and
<<synchronization-image-memory-barrier,image memory barriers>> apply to the
buffer range(s) or image subresource(s) included in the command. For
accesses to a byte of a buffer or image subresource of an image to be
synchronized between two sets of commands, the byte or image subresource
must: be included in both the first and second halves of the dependencies
described above, but need not be included in each step of the execution
dependency chain between them.
An execution dependency chain is _by-region_ if all stages in all
dependencies in the chain are framebuffer-space pipeline stages, and if the
ename:VK_DEPENDENCY_BY_REGION_BIT bit is included in all dependencies in the
chain. Otherwise, the execution dependency chain is not by-region. The two
halves of a memory dependency form a by-region dependency if *all* execution
dependency chains between them are by-region. In other words, if there is
any execution dependency between two sets of commands that is not by-region,
then the memory dependency is not by-region.
When an image memory barrier includes a layout transition, the barrier first
makes writes via pname:srcStageMask and pname:srcAccessMask available, then
performs the layout transition, then makes the contents of the image
subresource(s) in the new layout visible to memory accesses in
pname:dstStageMask and pname:dstAccessMask, as if there is an execution and
memory dependency between the source masks and the transition, as well as
between the transition and the destination masks. Any writes that have
previously been made available are included in the layout transition, but
any previous writes that have not been made available may: become lost or
corrupt the image.
All dependencies must: include at least one bit in each of the
pname:srcStageMask and pname:dstStageMask.
Memory dependencies are used to solve data hazards, e.g. to ensure that
write operations are visible to subsequent read operations (read-after-write
hazard), as well as write-after-write hazards. Write-after-read and
read-after-read hazards only require execution dependencies to synchronize.
[[synchronization-pipeline-barriers]]
== Pipeline Barriers
A _pipeline barrier_ inserts an execution dependency and a set of memory
dependencies between a set of commands earlier in the command buffer and a
set of commands later in the command buffer.
// refBegin vkCmdPipelineBarrier Insert a set of execution and memory barriers.
To record a pipeline barrier, call:
include::../api/protos/vkCmdPipelineBarrier.txt[]
* pname:commandBuffer is the command buffer into which the command is
recorded.
* pname:srcStageMask is a bitmask of elink:VkPipelineStageFlagBits
specifying a set of source pipeline stages (see
<<synchronization-pipeline-stage-flags>>).
* pname:dstStageMask is a bitmask specifying a set of destination pipeline
stages.
+
The pipeline barrier specifies an execution dependency such that all
work performed by the set of pipeline stages included in
pname:srcStageMask of the first set of commands completes before any
work performed by the set of pipeline stages included in
pname:dstStageMask of the second set of commands begins.
+
* pname:dependencyFlags is a bitmask of elink:VkDependencyFlagBits. The
execution dependency is by-region if the mask includes
ename:VK_DEPENDENCY_BY_REGION_BIT.
* pname:memoryBarrierCount is the length of the pname:pMemoryBarriers
array.
* pname:pMemoryBarriers is a pointer to an array of slink:VkMemoryBarrier
structures.
* pname:bufferMemoryBarrierCount is the length of the
pname:pBufferMemoryBarriers array.
* pname:pBufferMemoryBarriers is a pointer to an array of
slink:VkBufferMemoryBarrier structures.
* pname:imageMemoryBarrierCount is the length of the
pname:pImageMemoryBarriers array.
* pname:pImageMemoryBarriers is a pointer to an array of
slink:VkImageMemoryBarrier structures.
Each element of the pname:pMemoryBarriers, pname:pBufferMemoryBarriers and
pname:pImageMemoryBarriers arrays specifies two halves of a memory
dependency, as defined above. Specifics of each type of memory barrier and
the memory access types are defined further in
<<synchronization-memory-barriers,Memory Barriers>>.
If fname:vkCmdPipelineBarrier is called outside a render pass instance, then
the first set of commands is all prior commands submitted to the queue and
recorded in the command buffer and the second set of commands is all
subsequent commands recorded in the command buffer and submitted to the
queue. If fname:vkCmdPipelineBarrier is called inside a render pass
instance, then the first set of commands is all prior commands in the same
subpass and the second set of commands is all subsequent commands in the
same subpass.
include::../validity/protos/vkCmdPipelineBarrier.txt[]
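
A typical use outside a render pass instance is shown in the following
non-normative sketch, which copies data into a uniform buffer and then
guarantees the write is available and visible to uniform reads performed by
later vertex shader work. The buffer handles and copy region are assumed:

[source,c]
---------------------------------------------------
// Non-normative sketch: transfer write followed by a barrier so that later
// vertex shader reads of `uniformBuffer` observe the copied data.
vkCmdCopyBuffer(commandBuffer, stagingBuffer, uniformBuffer, 1, &copyRegion);

VkBufferMemoryBarrier bufferBarrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_UNIFORM_READ_BIT,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .buffer = uniformBuffer,
    .offset = 0,
    .size = VK_WHOLE_SIZE,
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TRANSFER_BIT,      // srcStageMask
                     VK_PIPELINE_STAGE_VERTEX_SHADER_BIT, // dstStageMask
                     0,                                   // dependencyFlags
                     0, NULL,                             // memory barriers
                     1, &bufferBarrier,                   // buffer barriers
                     0, NULL);                            // image barriers
---------------------------------------------------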
[[synchronization-pipeline-barriers-subpass-self-dependencies]]
=== Subpass Self-dependency
If fname:vkCmdPipelineBarrier is called inside a render pass instance,
the following restrictions apply. For a given subpass to allow a pipeline
barrier, the render pass must: declare a _self-dependency_ from that subpass
to itself. That is, there must: exist a sname:VkSubpassDependency in the
subpass dependency list for the render pass with pname:srcSubpass and
pname:dstSubpass equal to that subpass index. More than one self-dependency
can: be declared for each subpass. Self-dependencies must: only include
pipeline stage bits that are graphics stages. Self-dependencies mustnot:
have any earlier pipeline stages depend on any later pipeline stages. More
precisely, this means that whatever is the last pipeline stage in
pname:srcStageMask must: be no later than whatever is the first pipeline
stage in pname:dstStageMask (the latest source stage can: be equal to the
earliest destination stage). If the source and destination stage masks both
include framebuffer-space stages, then pname:dependencyFlags must: include
ename:VK_DEPENDENCY_BY_REGION_BIT.
A fname:vkCmdPipelineBarrier command inside a render pass instance must: be
a _subset_ of one of the self-dependencies of the subpass it is used in,
meaning that the stage masks and access masks must: each include only a
subset of the bits of the corresponding mask in that self-dependency. If the
self-dependency has ename:VK_DEPENDENCY_BY_REGION_BIT set, then so must: the
pipeline barrier. Pipeline barriers within a render pass instance can: only
be types sname:VkMemoryBarrier or sname:VkImageMemoryBarrier. If a
sname:VkImageMemoryBarrier is used, the image and image subresource range
specified in the barrier must: be a subset of one of the image views used by
the framebuffer in the current subpass. Additionally, pname:oldLayout must:
be equal to pname:newLayout, and both the pname:srcQueueFamilyIndex and
pname:dstQueueFamilyIndex must: be ename:VK_QUEUE_FAMILY_IGNORED.
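
One legal configuration under these rules is sketched below (non-normative):
a self-dependency on subpass 0 from the vertex shader stage to the fragment
shader stage, for example for storage writes made by vertex shaders that are
read by fragment shaders of later draws in the same subpass, together with a
matching in-subpass barrier that is a subset of it. The rest of the render
pass setup and the handles are assumed:

[source,c]
---------------------------------------------------
// Non-normative sketch: declared in VkRenderPassCreateInfo::pDependencies.
VkSubpassDependency selfDependency = {
    .srcSubpass = 0,
    .dstSubpass = 0,
    .srcStageMask = VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
    .dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
    .dependencyFlags = 0,
};

// Recorded inside subpass 0; its masks are subsets of the self-dependency.
VkMemoryBarrier barrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,
                     VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                     0,
                     1, &barrier,
                     0, NULL,
                     0, NULL);
---------------------------------------------------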
[[synchronization-pipeline-stage-flags]]
=== Pipeline Stage Flags
// refBegin VkPipelineStageFlagBits - Bitmask specifying pipeline stages
Several of the event commands, fname:vkCmdPipelineBarrier, and
sname:VkSubpassDependency depend on being able to specify where in the
logical pipeline events can: be signaled, or the source and destination of an
execution dependency. These pipeline stages are specified using a bitmask:
include::../api/enums/VkPipelineStageFlagBits.txt[]
The meaning of each bit is:
* ename:VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT:
Stage of the pipeline where commands are initially received by the
queue.
* ename:VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT:
Stage of the pipeline where Draw/DispatchIndirect data structures are
consumed.
* ename:VK_PIPELINE_STAGE_VERTEX_INPUT_BIT:
Stage of the pipeline where vertex and index buffers are consumed.
* ename:VK_PIPELINE_STAGE_VERTEX_SHADER_BIT:
Vertex shader stage.
* ename:VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT:
Tessellation control shader stage.
* ename:VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT:
Tessellation evaluation shader stage.
* ename:VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT:
Geometry shader stage.
* ename:VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT:
Fragment shader stage.
* ename:VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT:
Stage of the pipeline where early fragment tests (depth and stencil
tests before fragment shading) are performed.
* ename:VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT:
Stage of the pipeline where late fragment tests (depth and stencil tests
after fragment shading) are performed.
* ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT:
Stage of the pipeline after blending where the final color values are
output from the pipeline. This stage also includes resolve operations
that occur at the end of a subpass. Note that this does not necessarily
indicate that the values have been committed to memory.
* [[synchronization-transfer]]ename:VK_PIPELINE_STAGE_TRANSFER_BIT:
Execution of copy commands. This includes the operations resulting from
all transfer commands. The set of transfer commands comprises
fname:vkCmdCopyBuffer, fname:vkCmdCopyImage, fname:vkCmdBlitImage,
fname:vkCmdCopyBufferToImage, fname:vkCmdCopyImageToBuffer,
fname:vkCmdUpdateBuffer, fname:vkCmdFillBuffer,
fname:vkCmdClearColorImage, fname:vkCmdClearDepthStencilImage,
fname:vkCmdResolveImage, and fname:vkCmdCopyQueryPoolResults.
* ename:VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT:
Execution of a compute shader.
* ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT:
Final stage in the pipeline where commands complete execution.
* ename:VK_PIPELINE_STAGE_HOST_BIT:
A pseudo-stage indicating execution on the host of reads/writes of
device memory.
* ename:VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT:
Execution of all graphics pipeline stages.
* ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT:
Execution of all stages supported on the queue.
[NOTE]
.Note
====
The ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT and
ename:VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT differ from
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT in that they correspond to all
(or all graphics) stages, rather than to a specific stage at the end of the
pipeline. An execution dependency with only
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT in pname:dstStageMask will not
delay subsequent commands, while including either of the other two bits
will. Similarly, when defining a memory dependency, if the stage mask(s)
refer to all stages, then the indicated access types from all stages will be
made available and/or visible, but using only
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT would not make any accesses
available and/or visible because this stage does not access memory. The
ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT is useful for accomplishing
memory barriers and layout transitions when the next accesses will be done
in a different queue or by a presentation engine; in these cases subsequent
commands in the same queue do not need to wait, but the barrier or
transition must: complete before semaphores associated with the batch
signal.
====
// refEnd VkPipelineStageFlagBits
[NOTE]
.Note
====
If an implementation is unable to update the state of an event at any
specific stage of the pipeline, it may: instead update the event at any
logically later stage. For example, if an implementation is unable to signal
an event immediately after vertex shader execution is complete, it may:
instead signal the event after color attachment output has completed. In the
limit, an event may: be signaled after all graphics stages complete. If an
implementation is unable to wait on an event at any specific stage of the
pipeline, it may: instead wait on it at any logically earlier stage.
Similarly, if an implementation is unable to implement an execution
dependency at specific stages of the pipeline, it may: implement the
dependency in a way where additional source pipeline stages complete and/or
where additional destination pipeline stages' execution is blocked to
satisfy the dependency.
If an implementation makes such a substitution, it mustnot: affect the
semantics of execution or memory dependencies or image and buffer memory
barriers.
====
Certain pipeline stages are only available on queues that support a
particular set of operations. The following table lists, for each pipeline
stage flag, which queue capability flag must: be supported by the
queue. When multiple flags are enumerated in the second column of the table,
it means that the pipeline stage is supported on the queue if it supports
any of the listed capability flags. For further details on queue
capabilities see <<devsandqueues-physical-device-enumeration,Physical Device
Enumeration>> and <<devsandqueues-queues,Queues>>.
.Supported pipeline stage flags
[width="100%",cols="69%,31%",options="header",align="center"]
|========================================
|Pipeline stage flag | Required queue capability flag
|ename:VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT | None
|ename:VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT | ename:VK_QUEUE_GRAPHICS_BIT or ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_PIPELINE_STAGE_VERTEX_INPUT_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_VERTEX_SHADER_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT | ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_PIPELINE_STAGE_TRANSFER_BIT | ename:VK_QUEUE_GRAPHICS_BIT, ename:VK_QUEUE_COMPUTE_BIT or ename:VK_QUEUE_TRANSFER_BIT
|ename:VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT | None
|ename:VK_PIPELINE_STAGE_HOST_BIT | None
|ename:VK_PIPELINE_STAGE_ALL_GRAPHICS_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_PIPELINE_STAGE_ALL_COMMANDS_BIT | None
|========================================
[[synchronization-memory-barriers]]
=== Memory Barriers
_Memory barriers_ express the two halves of a memory dependency between an
earlier set of memory accesses and a later set of memory accesses.
Vulkan provides three types of memory barriers: global memory, buffer
memory, and image memory.
[[synchronization-global-memory-barrier]]
=== Global Memory Barriers
The global memory barrier type is specified with an instance of the
sname:VkMemoryBarrier structure. This type of barrier applies to memory
accesses involving all memory objects that exist at the time of its
execution.
// refBegin VkMemoryBarrier - Structure specifying a memory barrier
The sname:VkMemoryBarrier structure is defined as:
include::../api/structs/VkMemoryBarrier.txt[]
* pname:sType is the type of this structure.
* pname:pNext is `NULL` or a pointer to an extension-specific structure.
* pname:srcAccessMask is a bitmask of the classes of memory accesses
performed by the first set of commands that will participate in
the dependency.
* pname:dstAccessMask is a bitmask of the classes of memory accesses
performed by the second set of commands that will participate in
the dependency.
pname:srcAccessMask and pname:dstAccessMask, along with pname:srcStageMask
and pname:dstStageMask from flink:vkCmdPipelineBarrier, define the two
halves of a memory dependency and an execution dependency. Memory accesses
using the set of access types in pname:srcAccessMask performed in pipeline
stages in pname:srcStageMask by the first set of commands must: complete and
be available to later commands. The side effects of the first set of
commands will be visible to memory accesses using the set of access types in
pname:dstAccessMask performed in pipeline stages in pname:dstStageMask by
the second set of commands. If the barrier is by-region, these requirements
only apply to invocations within the same framebuffer-space region, for
pipeline stages that perform framebuffer-space work. The execution
dependency guarantees that execution of work by the destination stages of
the second set of commands will not begin until execution of work by the
source stages of the first set of commands has completed.
A common type of memory dependency is to avoid a read-after-write hazard. In
this case, the source access mask and stages will include writes from a
particular stage, and the destination access mask and stages will indicate
how those writes will be read in subsequent commands. However, barriers can:
also express write-after-read dependencies and write-after-write
dependencies, and are even useful to express read-after-read dependencies
across an image layout change.
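
For example, the following non-normative sketch guards a read-after-write
hazard between two compute dispatches, where the first writes storage
buffers that the second reads; the dispatch sizes are arbitrary:

[source,c]
---------------------------------------------------
// Non-normative sketch: global memory barrier between two dispatches.
VkMemoryBarrier memoryBarrier = {
    .sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_SHADER_READ_BIT,
};

vkCmdDispatch(commandBuffer, 64, 1, 1);   // writes via storage descriptors

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     0,
                     1, &memoryBarrier,
                     0, NULL,
                     0, NULL);

vkCmdDispatch(commandBuffer, 64, 1, 1);   // reads the results
---------------------------------------------------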
// refBegin VkAccessFlagBits - Bitmask specifying classes of memory access that will participate in a memory barrier dependency
Bits which can: be set in slink:VkMemoryBarrier::pname:srcAccessMask and
slink:VkMemoryBarrier::pname:dstAccessMask include:
[[synchronization-access-flags]]
include::../api/enums/VkAccessFlagBits.txt[]
* ename:VK_ACCESS_INDIRECT_COMMAND_READ_BIT indicates that the access is
an indirect command structure read as part of an indirect drawing
command.
* ename:VK_ACCESS_INDEX_READ_BIT indicates that the access is an index
buffer read.
* ename:VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT indicates that the access is a
read via the vertex input bindings.
* ename:VK_ACCESS_UNIFORM_READ_BIT indicates that the access is a read via
a uniform buffer or dynamic uniform buffer descriptor.
* ename:VK_ACCESS_INPUT_ATTACHMENT_READ_BIT indicates that the access is a
read via an input attachment descriptor.
* ename:VK_ACCESS_SHADER_READ_BIT indicates that the access is a read from
a shader via any other descriptor type.
* ename:VK_ACCESS_SHADER_WRITE_BIT indicates that the access is a write
or atomic from a shader via the same descriptor types as in
ename:VK_ACCESS_SHADER_READ_BIT.
* ename:VK_ACCESS_COLOR_ATTACHMENT_READ_BIT indicates that the access is a
read via a color attachment.
* ename:VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT indicates that the access is
a write via a color or resolve attachment.
* ename:VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT indicates that the
access is a read via a depth/stencil attachment.
* ename:VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT indicates that the
access is a write via a depth/stencil attachment.
* ename:VK_ACCESS_TRANSFER_READ_BIT indicates that the access is a read
from a transfer (copy, blit, resolve, etc.) operation. For the complete
set of transfer operations, see
<<synchronization-transfer,ename:VK_PIPELINE_STAGE_TRANSFER_BIT>>.
* ename:VK_ACCESS_TRANSFER_WRITE_BIT indicates that the access is a write
from a transfer (copy, blit, resolve, etc.) operation. For the complete
set of transfer operations, see
<<synchronization-transfer,ename:VK_PIPELINE_STAGE_TRANSFER_BIT>>.
* ename:VK_ACCESS_HOST_READ_BIT indicates that the access is a read via
the host.
* ename:VK_ACCESS_HOST_WRITE_BIT indicates that the access is a write via
the host.
* ename:VK_ACCESS_MEMORY_READ_BIT indicates that the access is a read via
a non-specific unit attached to the memory. This unit may: be external
to the Vulkan device or otherwise not part of the core Vulkan pipeline.
When included in pname:dstAccessMask, all writes using access types in
pname:srcAccessMask performed by pipeline stages in pname:srcStageMask
must: be visible in memory.
* ename:VK_ACCESS_MEMORY_WRITE_BIT indicates that the access is a write
via a non-specific unit attached to the memory. This unit may: be
external to the Vulkan device or otherwise not part of the core Vulkan
pipeline. When included in pname:srcAccessMask, all access types in
pname:dstAccessMask from pipeline stages in pname:dstStageMask will
observe the side effects of commands that executed before the barrier.
When included in pname:dstAccessMask all writes using access types in
pname:srcAccessMask performed by pipeline stages in pname:srcStageMask
must: be visible in memory.
Color attachment reads and writes are automatically (without memory or
execution dependencies) coherent and ordered against themselves and each
other for a given sample within a subpass of a render pass instance,
executing in <<fundamentals-queueoperation-apiorder,API order>>. Similarly,
depth/stencil attachment reads and writes are automatically coherent and
ordered against themselves and each other in the same circumstances.
Shader reads and/or writes through two variables (in the same or different
shader invocations) decorated with code:Coherent and which use the same
image view or buffer view are automatically coherent with each other, but
require execution dependencies if a specific order is desired. Similarly,
shader atomic operations are coherent with each other and with code:Coherent
variables. Non-code:Coherent shader memory accesses require memory
dependencies for writes to be available and reads to be visible.
Certain memory access types are only supported on queues that support a
particular set of operations. The following table lists, for each access
flag, which queue capability flag must: be supported by the queue. When
multiple flags are enumerated in the second column of the table it means
that the access type is supported on the queue if it supports any of the
listed capability flags. For further details on queue capabilities see
<<devsandqueues-physical-device-enumeration,Physical Device Enumeration>>
and <<devsandqueues-queues,Queues>>.
.Supported access flags
[width="100%",cols="67%,33%",options="header",align="center"]
|========================================
|Access flag | Required queue capability flag
|ename:VK_ACCESS_INDIRECT_COMMAND_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT or ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_ACCESS_INDEX_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_UNIFORM_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT or ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_ACCESS_INPUT_ATTACHMENT_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_SHADER_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT or ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_ACCESS_SHADER_WRITE_BIT | ename:VK_QUEUE_GRAPHICS_BIT or ename:VK_QUEUE_COMPUTE_BIT
|ename:VK_ACCESS_COLOR_ATTACHMENT_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT | ename:VK_QUEUE_GRAPHICS_BIT
|ename:VK_ACCESS_TRANSFER_READ_BIT | ename:VK_QUEUE_GRAPHICS_BIT, ename:VK_QUEUE_COMPUTE_BIT or ename:VK_QUEUE_TRANSFER_BIT
|ename:VK_ACCESS_TRANSFER_WRITE_BIT | ename:VK_QUEUE_GRAPHICS_BIT, ename:VK_QUEUE_COMPUTE_BIT or ename:VK_QUEUE_TRANSFER_BIT
|ename:VK_ACCESS_HOST_READ_BIT | None
|ename:VK_ACCESS_HOST_WRITE_BIT | None
|ename:VK_ACCESS_MEMORY_READ_BIT | None
|ename:VK_ACCESS_MEMORY_WRITE_BIT | None
|========================================
include::../validity/structs/VkMemoryBarrier.txt[]
[[synchronization-buffer-memory-barrier]]
=== Buffer Memory Barriers
The buffer memory barrier type is specified with an instance of the
sname:VkBufferMemoryBarrier structure. This type of barrier only applies to
memory accesses involving a specific range of the specified buffer object.
That is, a memory dependency formed from a buffer memory barrier is
<<synchronization-execution-and-memory-dependencies-types,scoped>> to the
specified range of the buffer. It is also used to transfer ownership of a
buffer range from one queue family to another, as described in the
<<resources-sharing,Resource Sharing>> section.
// refBegin VkBufferMemoryBarrier Structure specifying the parameters of a buffer memory barrier.
The sname:VkBufferMemoryBarrier structure is defined as:
include::../api/structs/VkBufferMemoryBarrier.txt[]
* pname:sType is the type of this structure.
* pname:pNext is `NULL` or a pointer to an extension-specific structure.
* pname:srcAccessMask is a bitmask of the classes of memory accesses
performed by the first set of commands that will participate in
the dependency.
* pname:dstAccessMask is a bitmask of the classes of memory accesses
performed by the second set of commands that will participate in
the dependency.
* pname:srcQueueFamilyIndex is the queue family that is relinquishing
ownership of the range of pname:buffer to another queue, or
ename:VK_QUEUE_FAMILY_IGNORED if there is no transfer of ownership.
* pname:dstQueueFamilyIndex is the queue family that is acquiring
ownership of the range of pname:buffer from another queue, or
ename:VK_QUEUE_FAMILY_IGNORED if there is no transfer of ownership.
* pname:buffer is a handle to the buffer whose backing memory is affected
by the barrier.
* pname:offset is an offset in bytes into the backing memory for
pname:buffer; this is relative to the base offset as bound to the buffer
(see flink:vkBindBufferMemory).
* pname:size is a size in bytes of the affected area of backing memory for
pname:buffer, or ename:VK_WHOLE_SIZE to use the range from pname:offset
to the end of the buffer.
include::../validity/structs/VkBufferMemoryBarrier.txt[]
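
A non-normative sketch of a buffer memory barrier follows: a compute
dispatch writes draw parameters into `indirectBuffer` (a hypothetical
handle), and the barrier makes that write available and visible to the
indirect command read of a later draw. The barrier is recorded outside a
render pass instance:

[source,c]
---------------------------------------------------
// Non-normative sketch: a compute shader writes VkDrawIndirectCommand
// parameters into `indirectBuffer`; the barrier synchronizes that write
// with the indirect command read of a later draw.
vkCmdDispatch(commandBuffer, 1, 1, 1);

VkBufferMemoryBarrier indirectBarrier = {
    .sType = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT,
    .dstAccessMask = VK_ACCESS_INDIRECT_COMMAND_READ_BIT,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .buffer = indirectBuffer,
    .offset = 0,
    .size = sizeof(VkDrawIndirectCommand),
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                     VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT,
                     0,
                     0, NULL,
                     1, &indirectBarrier,
                     0, NULL);

// ... begin a render pass and record vkCmdDrawIndirect using
// `indirectBuffer` at offset 0 ...
---------------------------------------------------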
[[synchronization-image-memory-barrier]]
=== Image Memory Barriers
The image memory barrier type is specified with an instance of the
sname:VkImageMemoryBarrier structure. This type of barrier only applies to
memory accesses involving a specific image subresource range of the
specified image object. That is, a memory dependency formed from an image
memory barrier is
<<synchronization-execution-and-memory-dependencies-types,scoped>> to the
specified image subresources of the image. It is also used to perform a
layout transition for an image subresource range, or to transfer ownership
of an image subresource range from one queue family to another as described
in the <<resources-sharing,Resource Sharing>> section.
// refBegin VkImageMemoryBarrier Structure specifying the parameters of an image memory barrier.
The sname:VkImageMemoryBarrier structure is defined as:
include::../api/structs/VkImageMemoryBarrier.txt[]
* pname:sType is the type of this structure.
* pname:pNext is `NULL` or a pointer to an extension-specific structure.
* pname:srcAccessMask is a bitmask of the classes of memory accesses
performed by the first set of commands that will participate in
the dependency.
* pname:dstAccessMask is a bitmask of the classes of memory accesses
performed by the second set of commands that will participate in
the dependency.
* pname:oldLayout describes the current layout of the image
subresource(s).
* pname:newLayout describes the new layout of the image subresource(s).
* pname:srcQueueFamilyIndex is the queue family that is relinquishing
ownership of the image subresource(s) to another queue, or
ename:VK_QUEUE_FAMILY_IGNORED if there is no transfer of ownership.
* pname:dstQueueFamilyIndex is the queue family that is acquiring
ownership of the image subresource(s) from another queue, or
ename:VK_QUEUE_FAMILY_IGNORED if there is no transfer of ownership.
* pname:image is a handle to the image whose backing memory is affected by
the barrier.
* pname:subresourceRange describes an area of the backing memory for
pname:image (see <<resources-image-views>> for the description of
sname:VkImageSubresourceRange), as well as the set of image subresources
whose image layouts are modified.
If pname:oldLayout differs from pname:newLayout, a layout transition occurs
as part of the image memory barrier, affecting the data contained in the
region of the image defined by the pname:subresourceRange. If
pname:oldLayout is ename:VK_IMAGE_LAYOUT_UNDEFINED, then the data is
undefined after the layout transition. This may: allow a more efficient
transition, since the data may: be discarded. The layout transition must:
occur after all operations using the old layout are completed and before all
operations using the new layout are started. This is achieved by ensuring
that there is a memory dependency between previous accesses and the layout
transition, as well as between the layout transition and subsequent
accesses, where the layout transition occurs between the two halves of a
memory dependency in an image memory barrier.
Layout transitions that are performed via image memory barriers are
automatically ordered against other layout transitions, including those that
occur as part of a render pass instance.
[NOTE]
.Note
====
See <<resources-image-layouts>> for details on available image layouts
and their usages.
====
include::../validity/structs/VkImageMemoryBarrier.txt[]
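
As a non-normative illustration, the following sketch transitions a newly
created image from ename:VK_IMAGE_LAYOUT_UNDEFINED to
ename:VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL before filling it with transfer
writes, discarding any previous contents. `commandBuffer` and `image` are
assumed to be valid handles and the barrier is recorded outside a render
pass instance:

[source,c]
---------------------------------------------------
// Non-normative sketch: layout transition before transfer writes.
VkImageMemoryBarrier toTransferDst = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER,
    .pNext = NULL,
    .srcAccessMask = 0,
    .dstAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT,
    .oldLayout = VK_IMAGE_LAYOUT_UNDEFINED,        // contents may be discarded
    .newLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
    .srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED,
    .image = image,
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel = 0,
        .levelCount = 1,
        .baseArrayLayer = 0,
        .layerCount = 1,
    },
};

vkCmdPipelineBarrier(commandBuffer,
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,   // srcStageMask
                     VK_PIPELINE_STAGE_TRANSFER_BIT,      // dstStageMask
                     0,
                     0, NULL,
                     0, NULL,
                     1, &toTransferDst);
---------------------------------------------------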
[[synchronization-waitidle]]
=== Wait Idle Operations
// refBegin vkQueueWaitIdle Wait for a queue to become idle.
To wait on the host for the completion of outstanding queue operations for a given queue, call:
include::../api/protos/vkQueueWaitIdle.txt[]
* pname:queue is the queue on which to wait.
fname:vkQueueWaitIdle is equivalent to submitting a fence to a queue and
waiting with an infinite timeout for that fence to signal.
include::../validity/protos/vkQueueWaitIdle.txt[]
// refBegin vkDeviceWaitIdle Wait for a device to become idle.
To wait on the host for the completion of outstanding queue operations for all queues on a given logical device, call:
include::../api/protos/vkDeviceWaitIdle.txt[]
* pname:device is the logical device to idle.
fname:vkDeviceWaitIdle is equivalent to calling fname:vkQueueWaitIdle for
all queues owned by pname:device.
include::../validity/protos/vkDeviceWaitIdle.txt[]
[[synchronization-implicit-ordering-hostwrites]]
== Host Write Ordering Guarantees
When submitting batches of command buffers to a queue via
flink:vkQueueSubmit, it is guaranteed that:
* Host writes to mappable device memory that occurred before the call to
fname:vkQueueSubmit are visible to the queue operation resulting from that
submission, if the device memory is coherent or if the memory range was
flushed with flink:vkFlushMappedMemoryRanges.