Buffers

Buffers — Module
Buffers module

This module contains functions to handle buffers.

The Buffer object is used to store data of type T with an offset, while the ThreadsBuffer object is used to store data of type T with an offset for each thread.

The buffers are used to store data in a contiguous memory block and to avoid memory allocation in loops. The buffers can be used with alloc! to allocate tensors of given dimensions, drop! to drop tensors from the buffer, and reset! to reset the buffer to the initial state.
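
The alloc!/drop!/reset! workflow above can be illustrated with a minimal plain-Julia sketch (MiniBuffer, minialloc!, minidrop!, and minireset! are hypothetical names for illustration only, not part of this module): a contiguous vector plus an offset, from which tensors are carved out as reshaped views.

```julia
# Minimal plain-Julia sketch of the buffer idea (illustrative only,
# not the Buffers API): a contiguous vector plus an offset, from which
# tensors are carved out as reshaped views.
mutable struct MiniBuffer{T}
    data::Vector{T}
    offset::Int
end
MiniBuffer{T}(len::Int) where {T} = MiniBuffer{T}(Vector{T}(undef, len), 0)

function minialloc!(buf::MiniBuffer, dims::Int...)
    len = prod(dims)
    rng = (buf.offset + 1):(buf.offset + len)
    buf.offset += len                      # bump the offset past the new tensor
    return reshape(view(buf.data, rng), dims)
end

minidrop!(buf::MiniBuffer, len::Int) = (buf.offset -= len)  # only the last tensor(s)
minireset!(buf::MiniBuffer) = (buf.offset = 0)

buf = MiniBuffer{Float64}(1000)
A = minialloc!(buf, 10, 10)   # occupies elements 1:100
B = minialloc!(buf, 10, 5)    # occupies elements 101:150, contiguous after A
minidrop!(buf, length(B))     # offset rewinds to 100; B's memory is reusable
minireset!(buf)               # offset back to 0; the whole block is reusable
```

No memory is freed on drop or reset; the offset is simply rewound, which is what makes allocation in loops cheap.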

Alternatively, the buffers can be reshaped with reshape_buf! to use the same memory block for different tensors or to allocate tensors with a specific offset.

The size of the buffer can be extended if necessary, and the buffer can be set to be extendable (default) or not at construction with Buffer or later with set_extendable!.
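
The extension behavior can be pictured in plain Julia (this is an illustrative sketch, not the Buffers implementation): when a request would overflow, an extendable buffer grows its backing vector, while a non-extendable one raises an error.

```julia
# Plain-Julia illustration of the "extendable" idea (not the Buffers API):
# grow the backing vector on overflow if allowed, otherwise error.
data = Vector{Float64}(undef, 100)
needed = 150
extendable = true
if needed > length(data)
    extendable || error("buffer too small and not extendable")
    resize!(data, needed)   # note: growing may move the memory and
                            # invalidates anything wrapped around old pointers
end
```

The comment on resize! is also why neuralyzed tensors must not outlive a buffer extension: growing the backing array can move it.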

In any case, ThreadsBuffer buffers should be released after use with Buffers.release! or reset!.

If some functions complain about tensors being aliases or if the tensors will be used in C, the neuralyze function can be used to wipe the memory about the origin of the tensor. Do not use this function if the size of the tensor might be changed in between, i.e., neuralyze the tensor only after all necessary allocations are done.

source

Exported functions

Buffers.Buffer — Type
Buffer{T}

Buffer object to store data of type T with an offset.

The buffer allocates an extra element at the beginning which is used to check if the buffer can be extended and to ensure that pointers to the allocated arrays will never point to the same memory as the buffer.

If the buffer is used with reshape_buf!, the offset is set to zero.

source
Buffers.ThreadsBuffer — Type
ThreadsBuffer{T}

Buffer object to store data of type T for each thread.

By default, the buffer is created for nthreads() threads, i.e., each thread has its own Buffer.

Create the buffer with ThreadsBuffer{T}(len, nbuf=Threads.nthreads()) and use it with alloc!, drop!, reset!, etc.

Warning

Always reset! or Buffers.release! the buffer after use!

Example

julia> buf = Buffer(10000)
julia> C = alloc!(buf, 10, 10, 20) # 10x10x20 destination tensor on a single thread
julia> tbuf = ThreadsBuffer(1000) # 1000 elements buffer for nthreads() threads each
julia> Threads.@threads for k = 1:20
          A = alloc!(tbuf, 10, 10) # 10x10 tensor
          B = alloc!(tbuf, 10, 10) # 10x10 tensor
          rand!(A)
          rand!(B)
          @tensor C[:,:,k][i,j] = A[i,l] * B[l,j]
          reset!(tbuf)
        end
source
Buffers.alloc! — Method
alloc!(buf, dims...; extend=true)

Allocate tensor of given dimensions in buffer buf.

The tensor is allocated in the buffer starting at the current offset. The offset is increased by the length of the tensor. If extend=true, the buffer is extended if necessary. For ThreadsBuffer, the tensor is allocated in the buffer of the current thread.

Return the allocated tensor.

julia> buf = Buffer(100000)
julia> A = alloc!(buf, 10, 10, 20) # 10x10x20 tensor
julia> B = alloc!(buf, 10, 10, 10) # 10x10x10 tensor starting after A
julia> C = alloc!(buf, 10, 20) # 10x20 tensor starting after B
julia> rand!(B)
julia> rand!(C)
julia> An = neuralyze(A) # tensor without origin
julia> @tensor An[i,j,k] = B[i,j,l] * C[l,k]
source
Buffers.drop! — Method
drop!(buf, tensor...)

Drop tensor(s) from buffer buf.

Only last tensors can be dropped. For ThreadsBuffer, drop tensors from the buffer of the current thread.

source
Buffers.nbuffers — Method
nbuffers(buf::ThreadsBuffer)

Return the number of buffers in buf::ThreadsBuffer.

source
Buffers.neuralyze — Method
neuralyze(tensor::AbstractArray)

Wipe the memory about origin of tensor.

tensor is a (contiguous!) array that is a (possibly reshaped) view of a larger array. Return the same tensor pointing to the same memory, but without the information about the origin. To be used together with alloc! or reshape_buf! to trick Base.mightalias.
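
The effect can be reproduced in plain Julia (this sketch uses unsafe_wrap directly and is not the package implementation): disjoint views of one parent are still conservatively flagged by Base.mightalias, because both carry the parent's data id, while an array wrapped around a raw pointer carries no origin information. The one-element offset mirrors the extra element the Buffer allocates, so the wrapped array never shares a pointer with the buffer itself.

```julia
# Plain-Julia sketch of the aliasing problem (not the package code):
# disjoint views of one parent still "might alias", while an array
# wrapped around a raw pointer has no origin information.
data = zeros(42)
A = reshape(view(data, 2:21), 4, 5)    # elements 2:21 (offset by one element)
B = reshape(view(data, 22:41), 4, 5)   # elements 22:41, disjoint from A
Base.mightalias(A, B)                  # true: both carry the parent's data id
An = unsafe_wrap(Array, pointer(data, 2), (4, 5))  # data must be kept alive!
Base.mightalias(An, B)                 # false: the origin has been "wiped"
An[1, 1] = 7.0                         # still the same memory as A
```

As the warning below notes, this is unsafe: nothing ties An to data anymore, so data must stay referenced (or be protected with GC.@preserve) while An is in use.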

Warning

Note that this function is unsafe and should be used with caution! If too much memory is wiped, Julia might garbage-collect the original array, and the tensor will point to invalid memory. Also, don't use this function if the buffer size might change in between.

Tip

One can use GC.@preserve to prevent the garbage collection of the original array (however, this shouldn't be necessary).

Example

julia> buf = Buffer(100000)
julia> A = alloc!(buf, 10, 10, 20) # 10x10x20 tensor
julia> B = alloc!(buf, 10, 10, 10) # 10x10x10 tensor starting after A
julia> C = alloc!(buf, 10, 20) # 10x20 tensor starting after B
julia> rand!(B)
julia> rand!(C)
julia> An = neuralyze(A) # tensor without origin but pointing to the same memory
julia> @tensor An[i,j,k] = B[i,j,l] * C[l,k]
source
Buffers.pseudo_alloc! — Method
pseudo_alloc!(buf, dims...)

Pseudo allocation function to calculate length for buffer.

The function is used in combination with @print_buffer_usage.

source
Buffers.pseudo_drop! — Method
pseudo_drop!(buf, lens...)

Pseudo drop function to calculate length for buffer.

The function is used in combination with @print_buffer_usage.

source
Buffers.pseudo_reset! — Method
pseudo_reset!(buf)

Pseudo reset function to calculate length for buffer.

The function is used in combination with @print_buffer_usage.

source
Buffers.repair! — Method
repair!(buf::ThreadsBuffer)

Repair ThreadsBuffer buf by releasing all buffers and resetting the pool.

This function should be used after the threaded loop if the buffers were not released properly.

source
Buffers.reset! — Method
reset!(buf)

Reset buffer buf to the initial state. For ThreadsBuffer, reset the buffer of the current thread and release it.

source
Buffers.reshape_buf! — Method
reshape_buf!(buf, dims...; offset=0, extend=true)

Reshape (part of) a buffer to given dimensions (without copying), using offset.

For ThreadsBuffer, reshape the buffer of the current thread. Call reset!(::ThreadsBuffer) or release! after use.

It can be used, e.g., for intermediates in tensor contractions.

Warning

Do not use this function together with alloc! or drop! on the same buffer!

Example

julia> buf = Buffer(100000)
julia> A = reshape_buf!(buf, 10, 10, 20) # 10x10x20 tensor
julia> B = reshape_buf!(buf, 10, 10, 10, offset=2000) # 10x10x10 tensor starting at 2001
julia> B .= rand(10,10,10)
julia> C = rand(10,20)
julia> @tensor A[i,j,k] = B[i,j,l] * C[l,k]
source
Buffers.used — Function
used(buf)

Return the number of elements used in buffer buf.

If the buffer is used with reshape_buf!, -1 is returned.

For ThreadsBuffer, return the number of elements used in the buffer of the current thread.

source
Buffers.with_buffer — Method
with_buffer(f::Function, buf::ThreadsBuffer)

Execute function f with buffer buf.

The buffer is released after the function is executed.

Example

julia> buf = Buffer(10000)
julia> C = alloc!(buf, 10, 10, 20) # 10x10x20 destination tensor on a single thread
julia> tbuf = ThreadsBuffer(1000)
julia> Threads.@threads for k = 1:20
          with_buffer(tbuf) do bu
            A = alloc!(bu, 10, 10) # 10x10 tensor
            B = alloc!(bu, 10, 10) # 10x10 tensor
            rand!(A)
            rand!(B)
            @tensor C[:,:,k][i,j] = A[i,l] * B[l,j]
          end
        end
source
Buffers.@print_buffer_usage — Macro
@print_buffer_usage(buf, ex)

Print buffer buf usage in expression ex.

The macro generates a body of a function that calculates the length of buffer buf in expression ex. It is possible to use the macro with multiple buffers, e.g., @print_buffer_usage buf1 buf2 begin ... end.

All function calls with buf as an argument are replaced with pseudo_<function> calls. The pseudo_alloc!, pseudo_drop!, and pseudo_reset! functions are pre-defined; custom pseudo_ functions can be defined if necessary.

Example

```julia
buf = Buffer(100000)
@print_buffer_usage buf begin
  if true
    A = alloc!(buf, 10, 10, 20)
  else
    A = alloc!(buf, 10, 10, 30)
  end
  B = alloc!(buf, 10, 10, 10)
  if true
    C = alloc!(buf, 10, 20)
  else
    C = alloc!(buf, 10, 30)
  end
  rand!(B)
  rand!(C)
  An = neuralyze(A)
  @tensor An[i,j,k] = B[i,j,l] * C[l,k]
  drop!(buf, B, C)
  reset!(buf)
end
```

source

Internal functions

Buffers._buffer_usage — Method
_buffer_usage(ex, bufs)

Collect allocations and deallocations, together with the corresponding if branches, in expression ex.

source
Buffers.current_buffer — Method
current_buffer(buf::ThreadsBuffer{T})

Return the buffer of the current thread.

If the buffer is not available, wait until it is released.

source
Buffers.current_buffer_index — Method
current_buffer_index(buf::ThreadsBuffer)

Return the index of the buffer of the current thread.

If the buffer is not available, wait until it is released.

source
Buffers.iscontiguous_tensor — Method
iscontiguous_tensor(tensor::AbstractArray)

Check if tensor is contiguous.

Return true if tensor is a Vector or a SubArray that is contiguous.
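
The notion being checked can be illustrated in plain Julia using Base.iscontiguous on SubArrays (this is not the internal helper itself, just the underlying concept): a unit-range view of a Vector occupies one contiguous block of memory, while a strided view does not.

```julia
# Plain-Julia illustration of contiguity (using Base.iscontiguous on
# SubArrays, not the internal helper): unit-range views are contiguous,
# strided views are not.
v = collect(1:100)
c = @view v[11:30]     # contiguous block of memory
s = @view v[1:2:40]    # stride 2, not contiguous
Base.iscontiguous(c)   # true
Base.iscontiguous(s)   # false
```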

source