came.CGGCNet
- class came.CGGCNet(g_or_canonical_etypes: DGLHeteroGraph | Sequence[Tuple[str]], in_dim_dict: Dict[str, int], h_dim: int = 32, h_dim_add: int | None = None, out_dim: int = 32, num_hidden_layers: int = 1, norm: str = 'right', use_weight: bool = True, dropout_feat: float = 0.0, dropout: float = 0.0, negative_slope: float = 0.05, batchnorm_ntypes: List[str] | None = None, layernorm_ntypes: List[str] | None = None, out_bias: bool = False, rel_names_out: Sequence[Tuple[str]] | None = None, share_hidden_weights: bool = False, attn_out: bool = True, kwdict_outgat: Dict = {}, share_layernorm: bool = True, residual: bool = False, **kwds)
Cell-Gene-Gene-Cell graph neural network.
Graph convolutional network for a cell-gene heterogeneous graph, with canonical edge types:
- ('cell', 'express', 'gene'): ov_adj
- ('gene', 'expressed_by', 'cell'): ov_adj.T
- ('gene', 'homolog_with', 'gene'): vv_adj + sparse.eye(n_gnodes)
- ('cell', 'self_loop_cell', 'cell'): sparse.eye(n_cells)
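A minimal sketch, assuming ov_adj (cell-by-gene) and vv_adj (gene-by-gene homology) are scipy.sparse matrices, of how such a heterograph could be assembled with dgl.heterograph; the helper name build_cell_gene_graph is illustrative and not part of CAME.

```python
# Hedged sketch: build the cell-gene heterograph described above.
# `ov_adj` and `vv_adj` are assumed to be scipy.sparse matrices.
import dgl
import scipy.sparse as sparse


def build_cell_gene_graph(ov_adj, vv_adj):
    n_cells, n_genes = ov_adj.shape
    adjs = {
        ('cell', 'express', 'gene'): ov_adj,
        ('gene', 'expressed_by', 'cell'): ov_adj.T,
        ('gene', 'homolog_with', 'gene'): vv_adj + sparse.eye(n_genes),
        ('cell', 'self_loop_cell', 'cell'): sparse.eye(n_cells),
    }
    # dgl.heterograph takes (src_ids, dst_ids) per canonical edge type
    data_dict = {}
    for rel, adj in adjs.items():
        coo = adj.tocoo()
        data_dict[rel] = (coo.row.astype('int64'), coo.col.astype('int64'))
    return dgl.heterograph(
        data_dict, num_nodes_dict={'cell': n_cells, 'gene': n_genes}
    )
```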
Notes
- Gene embeddings are computed from cells.
- Weight sharing across hidden layers is enabled by setting share_hidden_weights to True.
- Attention can be applied on the last layer (self.cell_classifier).
- The graph used for the embedding layer and the graph used for the hidden layers can differ.
- Expression values can be supplied as static edge weights, although this does not appear to work well.
- Parameters:
g_or_canonical_etypes (dgl.DGLGraph or a list of 3-tuples) – if a list of tuples is provided, each tuple should have the form (node_type_source, edge_type, node_type_destination).
in_dim_dict (Dict[str, int]) – input dimensions for each node type
h_dim (int) – number of dimensions of the hidden states
h_dim_add (Optional[int or Tuple]) – if provided, an extra hidden layer will be added before the classifier
out_dim (int) – number of classes (e.g., cell types)
num_hidden_layers (int) – number of hidden layers
norm (str) – normalization method for message aggregation, should be one of {‘none’, ‘both’, ‘right’, ‘left’} (Default: ‘right’)
use_weight (bool) – whether to apply a linear (weight) layer after message passing (Default: True)
dropout_feat (float) – dropout-rate for the input layer
dropout (float) – dropout-rate for the hidden layer
negative_slope (float) – negative slope for LeakyReLU
batchnorm_ntypes (List[str]) – node types to which BatchNorm is applied (Default: None)
layernorm_ntypes (List[str]) – node types to which LayerNorm is applied
out_bias (bool) – whether to use a bias on the output classifier
rel_names_out (a list of tuples or strings) – names of the output relations; if not provided, all relations of the graph are used.
share_hidden_weights (bool) – whether to share the graph-convolutional weights across hidden layers
attn_out (bool) – whether to use attentions on the output layer
kwdict_outgat (Dict) – a dict of key-word parameters for the output graph-attention layers
share_layernorm (bool) – whether to share the LayerNorm across hidden layers
residual (bool) – whether to use a residual connection between the embedding layer and the last hidden layer; this may NOT be helpful in transfer-learning scenarios (Default: False)
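For orientation, a hedged instantiation sketch for a graph g built as above; the dimensions, parameter values, and per-node-type input sizes below are placeholders rather than recommended settings.

```python
# Hedged sketch: instantiate CGGCNet for a heterograph `g` built as above.
# `in_dim_dict` values are placeholders; which node types carry input
# features and their sizes depend on how the data was preprocessed.
import came

model = came.CGGCNet(
    g_or_canonical_etypes=g.canonical_etypes,  # or pass the DGLHeteroGraph itself
    in_dim_dict={'cell': 1000, 'gene': 1000},  # assumed input dims per node type
    h_dim=128,
    out_dim=8,                      # e.g. number of cell types to classify
    num_hidden_layers=2,
    norm='right',
    dropout_feat=0.2,
    dropout=0.2,
    layernorm_ntypes=['cell', 'gene'],
    share_hidden_weights=True,      # reuse graph-conv weights across hidden layers
    attn_out=True,                  # attention on the output classifier
)
```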
See also
HiddenRRGCN
- __init__(g_or_canonical_etypes: DGLHeteroGraph | Sequence[Tuple[str]], in_dim_dict: Dict[str, int], h_dim: int = 32, h_dim_add: int | None = None, out_dim: int = 32, num_hidden_layers: int = 1, norm: str = 'right', use_weight: bool = True, dropout_feat: float = 0.0, dropout: float = 0.0, negative_slope: float = 0.05, batchnorm_ntypes: List[str] | None = None, layernorm_ntypes: List[str] | None = None, out_bias: bool = False, rel_names_out: Sequence[Tuple[str]] | None = None, share_hidden_weights: bool = False, attn_out: bool = True, kwdict_outgat: Dict = {}, share_layernorm: bool = True, residual: bool = False, **kwds)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
Methods
__init__(g_or_canonical_etypes, in_dim_dict) – Initializes internal Module state, shared by both nn.Module and ScriptModule.
add_module(name, module) – Adds a child module to the current module.
apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Returns an iterator over module buffers.
children() – Returns an iterator over immediate children modules.
cpu() – Moves all model parameters and buffers to the CPU.
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Sets the module in evaluation mode.
extra_repr() – Set the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(feat_dict, g, **other_inputs) – Defines the computation performed at every call.
get_attentions(feat_dict, g[, fuse]) – Output a cell-by-gene attention matrix (see the usage sketch after this table).
get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
get_classification_loss(out_cell, labels[, ...])
get_extra_state() – Returns any extra state to include in the module's state_dict.
get_hidden_states([feat_dict, g, i_layer, ...]) – Access the hidden states.
get_out_logits(feat_dict, g, **other_inputs) – Get the output logits.
get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
get_sampler(canonical_etypes[, ...])
get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
half() – Casts all floating point parameters and buffers to half datatype.
ipu([device]) – Moves all model parameters and buffers to the IPU.
load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
modules() – Returns an iterator over all modules in the network.
named_buffers([prefix, recurse]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Returns an iterator over module parameters.
register_backward_hook(hook) – Registers a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
register_forward_hook(hook) – Registers a forward hook on the module.
register_forward_pre_hook(hook) – Registers a forward pre-hook on the module.
register_full_backward_hook(hook) – Registers a backward hook on the module.
register_load_state_dict_post_hook(hook) – Registers a post hook to be run after module's load_state_dict is called.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Adds a parameter to the module.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – This function is called from load_state_dict() to handle any extra state found within the state_dict.
share_memory() – See torch.Tensor.share_memory_().
state_dict(*args[, destination, prefix, ...]) – Returns a dictionary containing a whole state of the module.
to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
to_empty(*, device) – Moves the parameters and buffers to the specified device without copying storage.
train([mode]) – Sets the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Moves all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Sets gradients of all model parameters to zero.
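As referenced in the table above, a hedged usage sketch of forward(), get_out_logits(), and get_attentions(); the feature keys, tensor shapes, and return structures shown here are assumptions, not documented behaviour.

```python
# Hedged sketch: one forward pass and attention extraction on graph `g`
# with the `model` instantiated earlier. Feature sizes match the assumed
# in_dim_dict of that sketch.
import torch

feat_dict = {
    'cell': torch.randn(g.num_nodes('cell'), 1000),  # assumed input features
    'gene': torch.randn(g.num_nodes('gene'), 1000),
}
outputs = model(feat_dict, g)                # forward(); return structure not documented here
logits = model.get_out_logits(feat_dict, g)  # output logits (e.g. per cell)
attn = model.get_attentions(feat_dict, g)    # cell-by-gene attention matrix
```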
Attributes
T_destination
dump_patches