I think you would have to write a scraper that puts the comments into a structure, then pull that structure into your AsciiDoc. That way the comments can be internally formatted with AsciiDoc markup and rendered in Asciidoctor-generated documents, but you won't need Asciidoctor to read the source files directly.
I would try a convention of using one comment style (say, """ docstrings) for non-publishing comments and # for the ones you wish to publish, or vice versa, or appending a ## to the ones that are meant for docs publishing. Your scraper can then read the relevant marker (""", #, ##, or whatever portion is important), scrape the keeper comments along with all the literal code, and arrange them in a file. In the file below, most comments have been tagged under uber_func, non-keeper comments have been dropped, and non-comment content has been set off as literal code:
# tag::function__uber_func[]
# tag::function__uber_func_form[]
uber_func(to_uber: str) -> str:
# end::function__uber_func_form[]
# tag::function__uber_func_desc[]
This is an overall description. Delivers some context.
# end::function__uber_func_desc[]
# tag::function__uber_func_body[]
# tag::function__uber_func_text[]
To uber means
# end::function__uber_func_text[]
# tag::function__uber_func_code[]
----
result = to_uber + " IS SOOO " + to_uber + "!!!"
----
# end::function__uber_func_code[]
# tag::function__uber_func_text[]
Function only returns upper case.
# end::function__uber_func_text[]
# tag::function__uber_func_code[]
----
return result.upper()
----
# end::function__uber_func_code[]
# end::function__uber_func[]
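For reference, the annotated Python source that your scraper would turn into a file like the above might look roughly like this. This is only a sketch, assuming ## marks a keeper comment; the exact convention, and how keeper comments map onto the desc/text/code tags, is whatever you settle on:

def uber_func(to_uber: str) -> str:
    ## This is an overall description. Delivers some context.
    ## To uber means
    result = to_uber + " IS SOOO " + to_uber + "!!!"
    # internal note: plain # comments never reach the docs
    ## Function only returns upper case.
    return result.upper()

The scraper's only real job is deciding which keeper comments and code runs land under which tags.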
I know the generated include file looks hideous, but it is super useful in an AsciiDoc template. For instance, use just:
uber_func::
include::includes/api-stuff.adoc[tags="function__uber_func_form"]
+
include::includes/api-stuff.adoc[tags="function__uber_func_desc"]
+
include::includes/api-stuff.adoc[tags="function__uber_func_body"]
This would be even better if you parsed the source into a data format (like JSON or YAML) and then pressed it into an AsciiDoc template dynamically. But you could maintain something like the above by hand if it were not too massive. At a certain size (20+ such records?) you need an intermediary datasource (an ephemeral data file produced by the scraping), and at a certain larger scale (> 100 code blocks/endpoints?) you likely need a system that specializes in API documentation, such as Doxygen.
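If you do go the JSON/YAML route, the scraping pass itself does not have to be elaborate. Here is a minimal sketch in Python that dumps one JSON record per top-level function, again assuming ## marks a keeper comment; the record shape is deliberately simplified and everything here (names, paths, behavior) is illustrative rather than a finished tool:

import json
import re
import sys

def scrape(path: str) -> list[dict]:
    # Collect "##" keeper comments and literal code, one record per
    # top-level function. A real scraper would also split out desc vs. body.
    records, current = [], None
    with open(path, encoding="utf-8") as source:
        for line in source:
            stripped = line.strip()
            heading = re.match(r"def\s+(\w+)\s*\(.*\).*:", stripped)
            if heading and not line[:1].isspace():
                # New top-level function: start a new record.
                current = {"name": heading.group(1),
                           "form": stripped.removeprefix("def "),
                           "body": []}
                records.append(current)
            elif current and stripped.startswith("##"):
                # Keeper comment: published as text.
                current["body"].append({"text": stripped.lstrip("#").strip()})
            elif current and stripped and not stripped.startswith("#"):
                # Everything else inside the function becomes literal code.
                current["body"].append({"code": stripped})
    return records

if __name__ == "__main__":
    json.dump(scrape(sys.argv[1]), sys.stdout, indent=2)

A templating pass (Jinja2, ERB, whatever you already use) can then loop over those records and emit the tagged AsciiDoc blocks shown earlier, or feed them straight into your document build.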