General methods

Dispatched Base methods

Base.setproperty! — Function
Base.setproperty!(t::Tensor, prop::Symbol, val)

If the data property of a tensor is modified, its gradient is reset to 0

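An illustrative sketch of this behavior. The Tensor constructor signature and the gradient property name below are assumptions, not confirmed by this page:

```julia
using NNJulia

# Hypothetical usage: the constructor arguments and the `gradient`
# property name are assumptions.
t = Tensor([1.0 2.0; 3.0 4.0])
# ... suppose a backward pass has populated t.gradient ...
t.data = [5.0 6.0; 7.0 8.0]  # assigning to `data` dispatches to setproperty!
# The gradient is now reset to 0, since the old gradient no longer
# corresponds to the new data.
```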
Base.size — Function
Base.size(t::Tensor)

Return the size of the tensor's data

Base.ndims — Function
Base.ndims(t::Tensor{Union{Float64,Int64,AbstractArray}})

Return the number of dimensions of the tensor's data

Base.length — Function
Base.length(t::Tensor{Union{Float64,Int64,AbstractArray}})

Return the length of the tensor's data

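Since size, ndims, and length all delegate to the tensor's data, they can be used as on a plain array. A sketch, assuming a Tensor constructor that wraps an array (an assumption on this page):

```julia
using NNJulia

# Illustrative only; the Tensor constructor signature is an assumption.
t = Tensor([1.0 2.0 3.0; 4.0 5.0 6.0])

size(t)    # size of the underlying data, here (2, 3)
ndims(t)   # number of dimensions of the data, here 2
length(t)  # number of elements in the data, here 6
```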
Base.length(d::DataLoader)

The length of a DataLoader is its number of batches.

Base.iterate — Function
Base.iterate(t::Tensor{Union{Float64,Int64,AbstractArray}})
Base.iterate(t::Tensor{Union{Float64,Int64,AbstractArray}}, state)

Iterate over the tensor's data

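Because Base.iterate is defined, a Tensor works anywhere an iterable is expected. A sketch (the Tensor constructor signature is an assumption):

```julia
using NNJulia

t = Tensor([1.0, 2.0, 3.0])  # constructor signature is an assumption

# Iteration delegates to the tensor's data, so a Tensor can be used
# in for loops, collect, sum, and similar iteration contexts.
for x in t
    println(x)
end
```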
Base.iterate(d::DataLoader, state=1)

Iterate over the DataLoader. On the first iteration, the list of indices is shuffled (if shuffling is enabled). Each iteration then returns a subset of the dataset according to the batch size, with size (dataSize..., batchSize).

For example, if each input sample contains two values and the batch size is 4, the following matrix is returned:

[
    1 0 1 1
    0 1 0 1
]
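A hedged sketch of iterating a DataLoader in a training loop. The constructor arguments (data, targets, batch size) are assumptions based on this page, not a confirmed signature:

```julia
using NNJulia

# Hypothetical: the DataLoader constructor arguments are assumptions.
X = rand(2, 8)               # 8 samples with 2 values each
y = rand(1, 8)
loader = DataLoader(X, y, 4) # batch size 4

length(loader)  # number of batches, here 2

for batch in loader
    # each data batch has size (dataSize..., batchSize), here (2, 4)
end
```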
Base.show — Function
Base.show(io::IO, t::Tensor)

String representation of a tensor

Base.show(io::IO, d::Dense)

String representation of a Dense layer

Base.show(io::IO, s::Sequential)

String representation of a Sequential

Base.show(io::IO, d::Flatten)

String representation of a Flatten layer


Methods for the gradient

NNJulia.Autodiff.zero_grad! — Function
zero_grad!(t::Tensor)

Set the gradient with respect to this tensor to 0

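Typical usage between training iterations. The Tensor constructor signature and the gradient property name are assumptions:

```julia
using NNJulia

t = Tensor([1.0, 2.0])  # constructor signature is an assumption
# ... after a backward pass, the tensor's gradient holds accumulated values ...
zero_grad!(t)  # reset the gradient to 0 before the next backward pass
```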
zero_grad!(d::Dense)

Set the gradient with respect to the weights and biases of this layer to 0

zero_grad!(s::Sequential)

Set the gradient of every tensor contained in the Sequential's layers to 0

zero_grad!(d::Flatten)

Do nothing for a Flatten layer, which has no trainable parameters

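In a training loop, zero_grad! is usually called once on the whole model, which recurses into each layer. A sketch; the Dense and Sequential constructor signatures below are assumptions:

```julia
using NNJulia

# Hypothetical layer sizes; constructor signatures are assumptions.
model = Sequential(Flatten(), Dense(4, 2))

# Clear all parameter gradients before computing the next batch's
# gradients; the Flatten layer is a no-op here.
zero_grad!(model)
```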