Mirror of https://github.com/status-im/Vulkan-Docs.git (synced 2025-02-18 09:16:51 +00:00)
* Vulkan 1.1 initial release. Bump API patch number and header version number to 70 for this update. The patch number will be used for both Vulkan 1.1 and Vulkan 1.0 updates, and continues to increment from the previous Vulkan 1.0.69 update.

NOTE: We are not publishing an updated 1.0.70 specification, or 1.1 reference pages, along with 1.1.70. There are still minor issues to work out with those build targets. However, we will soon generate all three types of documents as part of the regular spec update cycle.

NOTE: The GitHub KhronosGroup/Vulkan-Docs repository now maintains the current specification in the `master` branch. The `1.0` branch is out of date and will not be maintained, since we will be generating both 1.1 and 1.0 specifications from the `master` branch in the future.

Github Issues:

* Clarify how mapped memory ranges are flushed in flink:vkFlushMappedMemoryRanges (public issue 127).
* Specify that <<synchronization-pipeline-stages, Pipeline Stages>> are a list of tasks that each command performs, rather than necessarily being discrete pieces of hardware that one task flows through. Add a "`synchronization command`" pipeline type which all synchronization commands execute (it's just TOP + BOTTOM), with an explanatory note (public issue 554).

Internal Issues:

* Regenerate all images used in the spec in Inkscape with a consistent look-and-feel, and adjust image size attributes so they're all legible, and not too large with respect to the spec body text (internal issue 701).
* Document in the <<extensions,extensions>> appendix and in the style guide that `KHX` extensions are no longer supported or used in the Vulkan 1.1 timeframe (internal issue 714).
* Remove the leftover equations_temp directory after PDF build completes (internal issue 925).
* Update the <<credits, Credits (Informative)>> appendix to include contributors to Vulkan 1.1, and to list them according to the API version(s) they contributed to (internal issue 987).
* Add a NOTE to the introduction explaining that interfaces defined by extensions which were promoted to Vulkan 1.1 are now expressed as aliases of the Vulkan 1.1 type (internal issue 991).
* Instrument spec source conditionals so the spec can be built with 1.1 features, extensions promoted to 1.1, or both (internal issues 992, 998).
* Modify the XML schema and tools to support explicit aliasing of types, structures, and commands, and use this to express the promotion of 1.0 extensions to 1.1 core features, by making the extension interfaces aliases of the core features they were promoted to. Mark up promoted interfaces to allow still generating 1.0 + extension specifications (internal issue 991).
* Platform names, along with corresponding preprocessor symbols to enable extensions specific to those platforms, are now reserved in vk.xml using the <platform> tag. Update the registry schema and schema specification to match (internal issue 1011).
* Updated the <<textures-texel-replacement, Texel Replacement>> section to clarify that reads from invalid texels for image resources result in undefined values (internal issue 1014).
* Modify the description of the patch version so it continues to increment across minor version changes (internal issue 1033).
* Clarify and unify language describing physical device-level core and extension functionality in the <<fundamentals-validusage-extensions, Valid Usage for Extensions>>, <<fundamentals-validusage-versions, Valid Usage for Newer Core Versions>>, <<initialization-functionpointers, Command Function Pointers>>, <<initialization-phys-dev-extensions, Extending Physical Device From Device Extensions>>, and <<extended-functionality-instance-extensions-and-devices, Instance Extensions and Device Extensions>> sections, and for flink:vkGetPhysicalDeviceImageFormatProperties2. This documents that instance-level functionality is tied to the loader, and independent of the ICD; physical device-level functionality is tied to the ICD, and associated with device extensions; physical devices are treated more uniformly between core and extensions; and instance and physical device versions can be different (internal issue 1048).
* Updated the <<commandbuffers-lifecycle, Command Buffer Lifecycle>> section to clarify the ability for pending command buffers to transition to the invalid state after submission, and add a command buffer lifecycle diagram (internal issue 1050).
* Clarify that some slink:VkDescriptorUpdateTemplateCreateInfo parameters are ignored when push descriptors are not supported (internal issue 1054).
* Specify that flink:vkCreateImage will return an error if the image is too large, in a NOTE in the slink:VkImageFormatProperties description (internal issue 1078).
* Remove near-duplicate NOTEs about when to query function pointers dynamically in the <<initialization, Initialization>> chapter and replace them by extending the NOTE in the <<fundamentals-abi, Application Binary Interface>> section (internal issue 1085).
* Restore missing references to "`Sparse Resource Features`" in the elink:VkBufferCreateFlagBits description (internal issue 1086).
* Tidy up definitions of descriptor types in the `GL_KHR_vulkan_glsl` specification, the <<descriptorsets, Resource Descriptors>> section and its subsections, and the <<interfaces-resources-descset, Descriptor Set Interface>> for consistency, reduction of duplicate information, and removal of GLSL correspondence/examples (internal issue 1090).
* Correctly describe code:PrimitiveId as an Input for tessellation control and evaluation shaders, not an Output (internal issue 1109).
* Relax the requirements on chroma offsets for nearest filtering in <<textures-implict-reconstruction, Implicit Reconstruction>> (internal issue 1116).

Other Issues:

* Clarify the intended relationship between specification language and certain terms defined in the Khronos Intellectual Property Rights policy. Specific changes include:
** Rewrote IP/Copyright preamble and introduction to better agree with normative language both as laid out in the introduction, and the Khronos IPR policy.
** Added the notion of fully informative sections, which are now tagged with "`(Informative)`" in their titles.
** Removed non-normative uses of the phrase "`not required`".
** Clarified the distinction between the terms "`optional`" and "`not required:`" as they relate to the IPR Policy, and updated specification text to use terms consistent with the intent.
** Reduced additions to RFC 2119, and ensured the specification agreed with the leaner language.
** Removed the terms "`hardware`", "`software`", "`CPU`", and "`GPU`" from normative text.
** Moved several paragraphs that should not have been normative to informative notes.
** Clarified a number of definitions in the Glossary.
** Updated the document writing guide to match new terminology changes.
* Explicitly state in the <<fundamentals-objectmodel-lifetime-acquire, application memory lifetime>> language that for objects other than descriptor sets, a slink:VkDescriptorSetLayout object used in the creation of another object (such as slink:VkPipelineLayout or slink:VkDescriptorUpdateTemplateKHR) is only in use during the creation of that object and can be safely destroyed afterwards.
* Updated the <<textures-scale-factor, Scale Factor Operation>> section to use the ratio of anisotropy, rather than the integer sample rate, to perform the LOD calculation. The spec still allows use of the sample rate as the value used to calculate the LOD, but no longer requires it.
* Update `vulkan_ext.c` to include all platform-related definitions from the Vulkan platform headers, following the split of the headers into platform-specific and non-platform-specific files.
* Fix a bogus anchor name in the <<commandbuffers, Command Buffers>> chapter which accidentally duplicated an anchor in the pipelines chapter. Fortunately, there were no references to this anchor.
* Add valid usage statements for slink:VkWriteDescriptorSet and slink:VkCopyDescriptorSet requiring that the slink:VkDescriptorSetLayout used to allocate the source and destination sets must not have been destroyed at the time flink:vkUpdateDescriptorSets is called.
* Document the mapping of subgroup barrier functions to SPIR-V, and clarify a place where subgroupBarrier sounds like it is execution-only in the standalone `GL_KHR_shader_subgroup` specification.
* Use an HTML stylesheet derived from the Asciidoctor `colony` theme, with the default Arial font family replaced by the sans-serif Noto font family.
* Numerous minor updates to README.adoc, build scripts, Makefiles, and the registry and style guide specifications to support Vulkan 1.1 outputs, use them as defaults, and remove mention of `KHX` extensions, which are no longer supported.

New Extensions:

* `VK_EXT_vertex_attrib_divisor`
genRef.py · 583 lines · 19 KiB · Python · Executable File
#!/usr/bin/python3
#
# Copyright (c) 2016-2018 The Khronos Group Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# genRef.py - create Vulkan ref pages from spec source files
#
#
# Usage: genRef.py files

from reflib import *
from vkapi import *
import argparse, copy, io, os, pdb, re, string, sys
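
# For example, the ref page build typically invokes this script on the spec
# chapter sources, along the lines of (illustrative invocation, not the exact
# Makefile command):
#
#   python3 genRef.py -log ref.log -basedir man chapters/*.txt
#
# Each listed file is scanned for ref page blocks, and one .txt page per API
# entity is written under the -basedir directory.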

# Return True if name is a Vulkan extension name (ends with an upper-case
# author ID). This assumes that author IDs are at least two characters.
def isextension(name):
    return name[-2:].isalpha() and name[-2:].isupper()
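
# For example, isextension('vkGetPhysicalDeviceFeatures2KHR') is True (the
# name ends in the author ID 'KHR'), while isextension('vkCreateImage') is
# False. Note this is only a heuristic on the last two characters.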

# Print the Khronos CC-BY copyright notice on open file fp as an asciidoc
# comment block, which copyrights the source file.
def printCopyrightSourceComments(fp):
    print('// Copyright (c) 2014-2018 Khronos Group. This work is licensed under a', file=fp)
    print('// Creative Commons Attribution 4.0 International License; see', file=fp)
    print('// http://creativecommons.org/licenses/by/4.0/', file=fp)
    print('', file=fp)

def printFooter(fp):
    print('include::footer.txt[]', file=fp)
    print('', file=fp)


# Add a spec asciidoc macro prefix to a Vulkan name, depending on its type
# (protos, structs, enums, etc.)
def macroPrefix(name):
    if name in basetypes.keys():
        return 'basetype:' + name
    elif name in defines.keys():
        return 'slink:' + name
    elif name in enums.keys():
        return 'elink:' + name
    elif name in flags.keys():
        return 'elink:' + name
    elif name in funcpointers.keys():
        return 'tlink:' + name
    elif name in handles.keys():
        return 'slink:' + name
    elif name in protos.keys():
        return 'flink:' + name
    elif name in structs.keys():
        return 'slink:' + name
    elif name == 'TBD':
        return 'No cross-references are available'
    else:
        return 'UNKNOWN:' + name
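
# For example, a command name found in the protos dictionary such as
# 'vkCreateImage' becomes 'flink:vkCreateImage', and a structure name such as
# 'VkImageCreateInfo' becomes 'slink:VkImageCreateInfo'. Unrecognized names
# are flagged with the 'UNKNOWN:' prefix so they stand out in the output.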

# Return an asciidoc string with a list of 'See Also' references for the
# Vulkan entity 'name', based on the relationship mapping in vkapi.py and
# the additional references in explicitRefs. If no relationships are
# available, return None.
def seeAlsoList(apiName, explicitRefs = None):
    refs = {}

    # Add all the implicit references to refs
    if apiName in mapDict.keys():
        for name in sorted(mapDict[apiName].keys()):
            refs[name] = None

    # Add all the explicit references
    if explicitRefs != None:
        for name in explicitRefs.split():
            refs[name] = None

    names = [macroPrefix(name) for name in sorted(refs.keys())]
    if len(names) > 0:
        return ', '.join(names) + '\n'
    else:
        return None

# Remap include directives in a list of lines so they can be extracted to a
# different directory. Returns remapped lines.
#
# lines - text to remap
# baseDir - target directory
# specDir - source directory
def remapIncludes(lines, baseDir, specDir):
    # This should be compiled only once
    includePat = re.compile(r'^include::(?P<path>.*)\[\]')

    newLines = []
    for line in lines:
        matches = includePat.search(line)
        if matches != None:
            path = matches.group('path')

            # Relative path to include file from here
            incPath = specDir + '/' + path
            # Remap to be relative to baseDir
            newPath = os.path.relpath(incPath, baseDir)
            newLine = 'include::' + newPath + '[]\n'
            logDiag('remapIncludes: remapping', line, '->', newLine)
            newLines.append(newLine)
        else:
            newLines.append(line)
    return newLines
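
# For example, with specDir '/spec/chapters' and baseDir '/spec/gen/man'
# (illustrative paths), an extracted line such as
#   include::../api/protos/vkCreateImage.txt[]
# is rewritten relative to the output directory as
#   include::../../api/protos/vkCreateImage.txt[]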

# Generate header of a reference page
# pageName - string name of the page
# pageDesc - string short description of the page
# specText - string that goes in the "C Specification" section
# fieldName - string heading an additional section following specText, if not None
# fieldText - string that goes in the additional section
# descText - string that goes in the "Description" section
# fp - file to write to
def refPageHead(pageName, pageDesc, specText, fieldName, fieldText, descText, fp):
    printCopyrightSourceComments(fp)

    print(':data-uri:',
          ':icons: font',
          'include::../config/attribs.txt[]',
          '',
          sep='\n', file=fp)

    s = pageName + '(3)'
    print('= ' + s,
          '',
          sep='\n', file=fp)

    if pageDesc.strip() == '':
        pageDesc = 'NO SHORT DESCRIPTION PROVIDED'
        logWarn('refPageHead: no short description provided for', pageName)

    print('== Name',
          pageName + ' - ' + pageDesc,
          '',
          sep='\n', file=fp)

    print('== C Specification',
          '',
          specText,
          '',
          sep='\n', file=fp)

    if fieldName != None:
        print('== ' + fieldName,
              '',
              fieldText,
              sep='\n', file=fp)

    print('== Description',
          '',
          descText,
          '',
          sep='\n', file=fp)
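
# Sketch of the asciidoc skeleton refPageHead emits for a hypothetical command
# page named 'vkCreateImage' (copyright comments, attributes, and the actual
# page content are elided here):
#
#   = vkCreateImage(3)
#
#   == Name
#   vkCreateImage - <short description>
#
#   == C Specification
#   ...
#   == Parameters
#   ...
#   == Description
#   ...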

def refPageTail(pageName, seeAlso, fp, auto = False):
    # This is difficult to get working properly in asciidoc
    # specURL = 'link:{vkspecpath}/vkspec.html'

    # This needs to have the current repository branch path installed in
    # place of '1.0'
    specURL = 'https://www.khronos.org/registry/vulkan/specs/1.0/html/vkspec.html'

    if seeAlso == None:
        seeAlso = 'No cross-references are available\n'

    notes = [
        'For more information, see the Vulkan Specification at URL',
        '',
        specURL + '#' + pageName,
        '',
        ]

    if auto:
        notes.extend([
            'This page is a generated document.',
            'Fixes and changes should be made to the generator scripts, '
            'not directly.',
            ])
    else:
        notes.extend([
            'This page is extracted from the Vulkan Specification. ',
            'Fixes and changes should be made to the Specification, '
            'not directly.',
            ])

    print('== See Also',
          '',
          seeAlso,
          '',
          sep='\n', file=fp)

    print('== Document Notes',
          '',
          '\n'.join(notes),
          '',
          sep='\n', file=fp)

    printFooter(fp)

# Extract a single reference page into baseDir
# baseDir - base directory to emit page into
# specDir - directory extracted page source came from
# pi - pageInfo for this page relative to file
# file - list of strings making up the file, indexed by pi
def emitPage(baseDir, specDir, pi, file):
    pageName = baseDir + '/' + pi.name + '.txt'

    # Add a dictionary entry for this page
    global genDict
    genDict[pi.name] = None
    logDiag('emitPage:', pageName)

    # Short description
    if pi.desc == None:
        pi.desc = '(no short description available)'

    # Not sure how this happens yet
    if pi.include == None:
        logWarn('emitPage:', pageName, 'INCLUDE == None, no page generated')
        return

    # Specification text
    lines = remapIncludes(file[pi.begin:pi.include+1], baseDir, specDir)
    specText = ''.join(lines)

    # Member/parameter list, if there is one
    field = None
    fieldText = None
    if pi.param != None:
        if pi.type == 'structs':
            field = 'Members'
        elif pi.type in ['protos', 'funcpointers']:
            field = 'Parameters'
        else:
            logWarn('emitPage: unknown field type:', pi.type,
                    'for', pi.name)
        lines = remapIncludes(file[pi.param:pi.body], baseDir, specDir)
        fieldText = ''.join(lines)

    # Description text
    lines = remapIncludes(file[pi.body:pi.end+1], baseDir, specDir)
    descText = ''.join(lines)

    # Substitute xrefs to point at the main spec
    specLinksPattern = re.compile(r'<<([^>,]+)[,]?[ \t\n]*([^>,]*)>>')
    specLinksSubstitute = r"link:{html_spec_relative}#\1[\2]"
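    # For example, a spec cross-reference such as
    #   <<fundamentals-abi, Application Binary Interface>>
    # becomes a direct link into the main specification:
    #   link:{html_spec_relative}#fundamentals-abi[Application Binary Interface]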
    specText, n = specLinksPattern.subn(specLinksSubstitute, specText)
    if fieldText != None:
        fieldText, n = specLinksPattern.subn(specLinksSubstitute, fieldText)
    descText, n = specLinksPattern.subn(specLinksSubstitute, descText)

    fp = open(pageName, 'w', encoding='utf-8')
    refPageHead(pi.name,
                pi.desc,
                specText,
                field, fieldText,
                descText,
                fp)
    refPageTail(pi.name, seeAlsoList(pi.name, pi.refs), fp, auto = False)
    fp.close()

# Autogenerate a single reference page in baseDir
# Script only knows how to do this for /enums/ pages, at present
# baseDir - base directory to emit page into
# pi - pageInfo for this page relative to file
# file - list of strings making up the file, indexed by pi
def autoGenEnumsPage(baseDir, pi, file):
    pageName = baseDir + '/' + pi.name + '.txt'
    fp = open(pageName, 'w', encoding='utf-8')

    # Add a dictionary entry for this page
    global genDict
    genDict[pi.name] = None
    logDiag('autoGenEnumsPage:', pageName)

    # Short description
    if pi.desc == None:
        pi.desc = '(no short description available)'

    # Description text. Allow for the case where an enum definition
    # is not embedded.
    if not pi.embed:
        embedRef = ''
    else:
        embedRef = ''.join([
            ' * The reference page for ',
            macroPrefix(pi.embed),
            ', where this interface is defined.\n' ])

    txt = ''.join([
        'For more information, see:\n\n',
        embedRef,
        ' * The See Also section for other reference pages using this type.\n',
        ' * The Vulkan Specification.\n' ])

    refPageHead(pi.name,
                pi.desc,
                ''.join(file[pi.begin:pi.include+1]),
                None, None,
                txt,
                fp)
    refPageTail(pi.name, seeAlsoList(pi.name, pi.refs), fp, auto = True)
    fp.close()

# Pattern to break apart a Vk*Flags{authorID} name, used in autoGenFlagsPage.
flagNamePat = re.compile(r'(?P<name>\w+)Flags(?P<author>[A-Z]*)')
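
# For example, flagNamePat splits 'VkImageCreateFlags' into name
# 'VkImageCreate' and author '', and 'VkSwapchainCreateFlagsKHR' into name
# 'VkSwapchainCreate' and author 'KHR', so the corresponding FlagBits types
# 'VkImageCreateFlagBits' and 'VkSwapchainCreateFlagBitsKHR' can be inferred.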

# Autogenerate a single reference page in baseDir for a Vk*Flags type
# baseDir - base directory to emit page into
# flagName - Vk*Flags name
def autoGenFlagsPage(baseDir, flagName):
    pageName = baseDir + '/' + flagName + '.txt'
    fp = open(pageName, 'w', encoding='utf-8')

    # Add a dictionary entry for this page
    global genDict
    genDict[flagName] = None
    logDiag('autoGenFlagsPage:', pageName)

    # Short description
    matches = flagNamePat.search(flagName)
    if matches != None:
        name = matches.group('name')
        author = matches.group('author')
        logDiag('autoGenFlagsPage: split name into', name, 'Flags', author)
        flagBits = name + 'FlagBits' + author
        desc = 'Bitmask of ' + flagBits
    else:
        logWarn('autoGenFlagsPage:', pageName, 'does not end in "Flags{author ID}". Cannot infer FlagBits type.')
        flagBits = None
        desc = 'Unknown Vulkan flags type'

    # Description text
    if flagBits != None:
        txt = ''.join([
            'etext:' + flagName,
            ' is a mask of zero or more elink:' + flagBits + '.\n',
            'It is used as a member and/or parameter of the structures and commands\n',
            'in the See Also section below.\n' ])
    else:
        txt = ''.join([
            'etext:' + flagName,
            ' is an unknown Vulkan type, assumed to be a bitmask.\n' ])

    refPageHead(flagName,
                desc,
                'include::../api/flags/' + flagName + '.txt[]\n',
                None, None,
                txt,
                fp)
    refPageTail(flagName, seeAlsoList(flagName), fp, auto = True)
    fp.close()

# Autogenerate a single handle page in baseDir for a Vk* handle type
# baseDir - base directory to emit page into
# handleName - Vk* handle name
# @@ Need to determine creation function & add handles/ include for the
# @@ interface in generator.py.
def autoGenHandlePage(baseDir, handleName):
    pageName = baseDir + '/' + handleName + '.txt'
    fp = open(pageName, 'w', encoding='utf-8')

    # Add a dictionary entry for this page
    global genDict
    genDict[handleName] = None
    logDiag('autoGenHandlePage:', pageName)

    # Short description
    desc = 'Vulkan object handle'

    descText = ''.join([
        'sname:' + handleName,
        ' is an object handle type, referring to an object used\n',
        'by the Vulkan implementation. These handles are created or allocated\n',
        'by the vk @@ TBD @@ function, and used by other Vulkan structures\n',
        'and commands in the See Also section below.\n' ])

    refPageHead(handleName,
                desc,
                'include::../api/handles/' + handleName + '.txt[]\n',
                None, None,
                descText,
                fp)
    refPageTail(handleName, seeAlsoList(handleName), fp, auto = True)
    fp.close()

# Extract reference pages from a spec asciidoc source file
# specFile - filename to extract from
# baseDir - output directory to generate page in
#
def genRef(specFile, baseDir):
    file = loadFile(specFile)
    if file == None:
        return

    # Save the path to this file for later use in rewriting relative includes
    specDir = os.path.dirname(os.path.abspath(specFile))

    pageMap = findRefs(file, specFile)
    logDiag(specFile + ': found', len(pageMap.keys()), 'potential pages')

    sys.stderr.flush()

    # Fix up references in pageMap
    fixupRefs(pageMap, specFile, file)

    # Create each page, if possible
    for name in sorted(pageMap.keys()):
        pi = pageMap[name]

        printPageInfo(pi, file)

        if pi.Warning:
            logDiag('genRef:', pi.name + ':', pi.Warning)

        if pi.extractPage:
            emitPage(baseDir, specDir, pi, file)
        elif pi.type == 'enums':
            autoGenEnumsPage(baseDir, pi, file)
        elif pi.type == 'flags':
            autoGenFlagsPage(baseDir, pi.name)
        else:
            # Don't extract this page
            logWarn('genRef: Cannot extract or autogenerate:', pi.name)

# Generate baseDir/apispec.txt, the single-page version of the ref pages.
# This assumes there's a page for everything in the vkapi.py dictionaries.
# Extensions (KHR, EXT, etc.) are currently skipped
def genSinglePageRef(baseDir):
    # Accumulate head of page
    head = io.StringIO()

    printCopyrightSourceComments(head)

    print('= Vulkan API Reference Pages',
          ':data-uri:',
          ':icons: font',
          ':doctype: book',
          ':numbered!:',
          ':max-width: 200',
          ':data-uri:',
          ':toc2:',
          ':toclevels: 2',
          '',
          sep='\n', file=head)

    print('include::copyright-ccby.txt[]', file=head)
    print('', file=head)

    # Inject the table of contents. Asciidoc really ought to be generating
    # this for us.
    sections = [
        [ protos,       'protos',       'Vulkan Commands' ],
        [ handles,      'handles',      'Object Handles' ],
        [ structs,      'structs',      'Structures' ],
        [ enums,        'enums',        'Enumerations' ],
        [ flags,        'flags',        'Flags' ],
        [ funcpointers, 'funcpointers', 'Function Pointer Types' ],
        [ basetypes,    'basetypes',    'Vulkan Scalar types' ],
        [ defines,      'defines',      'C Macro Definitions' ] ]

    # Accumulate body of page
    body = io.StringIO()

    for (apiDict, label, title) in sections:
        # Add section title/anchor header to body
        anchor = '[[' + label + ',' + title + ']]'
        print(anchor,
              '== ' + title,
              '',
              ':leveloffset: 2',
              '',
              sep='\n', file=body)

        # count = 0
        for refPage in sorted(apiDict.keys()):
            # if count > 3:
            #     continue
            # count = count + 1

            # Add page to body
            if apiDict == defines or not isextension(refPage):
                print('include::' + refPage + '.txt[]', file=body)
            else:
                print('// not including ' + refPage, file=body)

        print('\n' + ':leveloffset: 0' + '\n', file=body)

    # Write head and body to the output file
    pageName = baseDir + '/apispec.txt'
    fp = open(pageName, 'w', encoding='utf-8')

    print(head.getvalue(), file=fp, end='')
    print(body.getvalue(), file=fp, end='')

    head.close()
    body.close()
    fp.close()

if __name__ == '__main__':
    global genDict
    genDict = {}

    parser = argparse.ArgumentParser()

    parser.add_argument('-diag', action='store', dest='diagFile',
                        help='Set the diagnostic file')
    parser.add_argument('-warn', action='store', dest='warnFile',
                        help='Set the warning file')
    parser.add_argument('-log', action='store', dest='logFile',
                        help='Set the log file for both diagnostics and warnings')
    parser.add_argument('-basedir', action='store', dest='baseDir',
                        default='man',
                        help='Set the base directory in which pages are generated')
    parser.add_argument('-noauto', action='store_true',
                        help='Don\'t generate inferred ref pages automatically')
    parser.add_argument('files', metavar='filename', nargs='*',
                        help='a filename to extract ref pages from')
    parser.add_argument('--version', action='version', version='%(prog)s 1.0')

    results = parser.parse_args()

    setLogFile(True,  True, results.logFile)
    setLogFile(True, False, results.diagFile)
    setLogFile(False, True, results.warnFile)

    baseDir = results.baseDir

    for file in results.files:
        genRef(file, baseDir)

    # Now figure out which pages *weren't* generated from the spec.
    # This relies on the dictionaries of API constructs in vkapi.py.

    # For Flags (e.g. Vk*Flags types), it's easy to autogenerate pages.
    if not results.noauto:
        # autoGenFlagsPage is no longer needed because they are added to
        # the spec sources now.
        # for page in flags.keys():
        #     if not (page in genDict.keys()):
        #         autoGenFlagsPage(baseDir, page)

        # autoGenHandlePage is no longer needed because they are added to
        # the spec sources now.
        # for page in structs.keys():
        #     if typeCategory[page] == 'handle':
        #         autoGenHandlePage(baseDir, page)

        sections = [
            [ flags,        'Flag Types' ],
            [ enums,        'Enumerated Types' ],
            [ structs,      'Structures' ],
            [ protos,       'Prototypes' ],
            [ funcpointers, 'Function Pointers' ],
            [ basetypes,    'Vulkan Scalar Types' ] ]

        for (apiDict, title) in sections:
            flagged = False
            for page in apiDict.keys():
                if not (page in genDict.keys()):
                    if not flagged:
                        logWarn(title, 'with no ref page generated:')
                        flagged = True
                    logWarn(' ', page)

    genSinglePageRef(baseDir)