# A Statistical Learning library for Humans
Impetuous decomposes a set of expressions into a group expression. The toolkit currently offers enrichment analysis, hierarchical enrichment analysis, PLS regression, shape alignment or clustering, as well as rudimentary factor analysis.

Expression regulation can be studied via a statistical test that relates it to the observables in the journal file. The resulting p values are then FDR corrected to produce adjusted p values.
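The FDR step referred to above is commonly the Benjamini-Hochberg adjustment. As a minimal sketch of that adjustment concept, applied to a hypothetical p value vector with `statsmodels` (this is not the library's internal routine, only an illustration of the correction it refers to):

```
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical p values; the correction rescales them so that the
# expected false discovery rate is controlled at the chosen alpha.
p_values = np.array( [ 0.001 , 0.02 , 0.03 , 0.4 , 0.5 ] )
rejected , q_values , _ , _ = multipletests( p_values , alpha = 0.05 , method = 'fdr_bh' )
print ( q_values )   # the adjusted (q) values, one per test
```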

Visit the active code via :
https://github.com/richardtjornhammar/impetuous

Visit the published code : 
https://doi.org/10.5281/zenodo.2594690

Cite using :
DOI: 10.5281/zenodo.2594690

# Pip installation with :
```
pip install impetuous-gfa
```

# Version controlled installation of the Impetuous library

In order to run these code snippets in a version controlled environment we recommend that you install the Nix package manager. Nix download links as of October 2020:

https://nixos.org/download.html

```
$ curl -L https://nixos.org/nix/install | sh
```

If you cannot install it on your Windows machine then please consider installing the Windows Subsystem for Linux first:

```
https://docs.microsoft.com/en-us/windows/wsl/install-win10
```

In order to run the code in this notebook you must enter a sensible working environment. Don't worry! We have created one for you. It's version controlled against Python 3.7 and you can get the file here:

https://github.com/richardtjornhammar/rixcfgs/blob/master/code/environments/impetuous-shell.nix

Once you have installed Nix (and, on Windows, WSL), or if you use a Linux (NixOS) or BSD-like system, you should be able to execute the following command in a terminal:

```
$ nix-shell impetuous-shell.nix
```

Now you should be able to start your Jupyter notebook locally:

```
$ jupyter-notebook impetuous_finance.ipynb
```

and that's it.

# Usage example 1 : elaborate informatics

code: https://gitlab.com/stochasticdynamics/eplsmta-experiments
docs: https://arxiv.org/pdf/2001.06544.pdf

# Usage example 2 : simple regression code

Now, while in a good environment, you can write the following in your Jupyter notebook or in a dedicated file.py:

```
import pandas as pd
import numpy as np

import impetuous.quantification as impq

analyte_df = pd.read_csv( 'analytes.csv' , sep='\t' , index_col=0 )
journal_df = pd.read_csv( 'journal.csv'  , sep='\t' , index_col=0 )

formula = 'S ~ C(industry) : C(block) + C(industry) + C(block)'

res_dfs 	= impq.run_rpls_regression ( analyte_df , journal_df , formula , owner_by = 'angle' )
results_lookup	= impq.assign_quality_measures( journal_df , res_dfs , formula )

print ( results_lookup )
print ( res_dfs )
```
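The layout of `analytes.csv` and `journal.csv` is not spelled out here; judging from the diabetes example further down, a reasonable assumption is that both tables carry samples along the columns, with analytes respectively journal descriptors (such as `industry` and `block` from the formula above) along the rows. A purely hypothetical construction of such frames, for illustration only:

```
import pandas as pd

# Made-up toy frames mirroring the assumed layout; none of these
# names or values come from the library or its documentation.
analyte_df = pd.DataFrame( [ [ 1.0 , 2.0 , 3.0 ] , [ 0.5 , 0.7 , 0.9 ] ] ,
                           index   = [ 'analyte_A' , 'analyte_B' ] ,
                           columns = [ 'sample_1' , 'sample_2' , 'sample_3' ] )

journal_df = pd.DataFrame( [ [ 'banking' , 'tech' , 'tech' ] , [ 'b1' , 'b1' , 'b2' ] ] ,
                           index   = [ 'industry' , 'block' ] ,
                           columns = [ 'sample_1' , 'sample_2' , 'sample_3' ] )
```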

# Usage example 3 : Novel NLP sequence alignment

Finding a word in a text is a simple and trivial problem in computer science. However, matching a sequence of characters to a larger text segment is not. In this example you will be shown how to employ the impetuous text fitting procedure. The strength of the fit is conveyed via the returned score, with higher values meaning a stronger match between the two texts. This becomes costly for large texts, so the text is broken into segments and words. If there is a strong word-to-word match then the score for the entire segment is calculated. The off and main diagonal power terms control how a string shift is evaluated: Fortinbras and Faortinbraaks are probably the same word even though the latter contains two character shifts. In this example both `requests` and `BeautifulSoup` are employed to parse internet text.

```
import numpy as np
import pandas as pd

import impetuous.fit as impf    # THE IMPETUOUS FIT MODULE
                                # CONTAINS SCORE ALIGNMENT ROUTINE

import requests                 # FOR MAKING URL REQUESTS
from bs4 import BeautifulSoup   # FOR PARSING URL REQUEST CONTENT

if __name__ == '__main__' :

    print ( 'DOING TEXT SCORING VIA MY SEQUENCE ALIGNMENT ALGORITHM' )
    url_       = 'http://shakespeare.mit.edu/hamlet/full.html'

    response   = requests.get( url_ )
    bs_content = BeautifulSoup ( response.content , features="html.parser")

    name = 'fortinbras'
    score_co = 500
    S , S2 , N = 0 , 0 , 0
    for btext in bs_content.find_all('blockquote'):

        theTextSection = btext.get_text()
        theText        = theTextSection.split('\n')

        for segment in theText:
            pieces = segment.split(' ')
            if len(pieces)>1 :
                for piece in pieces :
                    if len(piece)>1 :
                        score = impf.score_alignment( [ name , piece ],
                                    main_diagonal_power = 3.5, shift_allowance=2,
                                    off_diagonal_power = [1.5,0.5] )
                        S    += score
                        S2   += score*score
                        N    += 1
                        if score > score_co :
                            print ( "" )
                            print ( score,name,piece )
                            print ( theTextSection )
                            print ( impf.score_alignment( [ name , theTextSection ],
                                        main_diagonal_power = 3.5, shift_allowance=2,
                                        off_diagonal_power = [1.5,0.5] ) )
                            print ( "" )

    print ( S/N )            # MEAN ALIGNMENT SCORE
    print ( S2/N-S*S/N/N )   # VARIANCE OF THE ALIGNMENT SCORES
```
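As a smaller illustration of the shift tolerance discussed above, the scoring routine can also be called directly on two words. A minimal sketch using only the call signature from the example above; the absolute score values depend on the library version and should be read relative to each other:

```
import impetuous.fit as impf

# Compare the reference word against an exact match, a shifted misspelling
# and an unrelated word. Parameters mirror the example above.
for candidate in [ 'fortinbras' , 'faortinbraaks' , 'horatio' ] :
    score = impf.score_alignment( [ 'fortinbras' , candidate ] ,
                main_diagonal_power = 3.5 , shift_allowance = 2 ,
                off_diagonal_power = [ 1.5 , 0.5 ] )
    print ( candidate , score )
```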

# Usage example 4 : Diabetes analysis

Here we show how to use a novel multifactor method on a diabetes data set to deduce important transcripts with respect to being diabetic. The data was obtained from the [Broad Institute](http://www.gsea-msigdb.org/gsea/datasets.jsp) and contains gene expressions from the microarray hgu133a platform. We choose to employ the `Diabetes_collapsed_symbols.gct` file since it has already been collapsed down to useful transcripts. We have entered an `impetuous-gfa 0.50.0` environment and set up a `diabetes.py` file with the following code content:

```
import pandas as pd
import numpy as np

if __name__ == '__main__' :
    analyte_df = pd.read_csv( '../data/Diabetes_collapsed_symbols.gct' , sep='\t' , index_col=0 , header=2 ).iloc[:,1:]
```

In order to illustrate the use of low value suppression we use the reducer module. A `tanh` based soft max function is employed by the `confred` function to suppress values that lie below the central level (here the 50th percentile) of each sample series.
```
    from impetuous.reducer import get_procentile,confred
    for i_ in range(len(analyte_df.columns.values)):
        vals   = analyte_df.iloc[:,i_].values
        eta    = get_procentile( vals,50 )
        varpi  = get_procentile( vals,66 ) - get_procentile( vals,33 )
        analyte_df.iloc[:,i_] = confred(vals,eta,varpi)

    print ( analyte_df )
```
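The exact definition of `confred` is not reproduced in this README. The general idea of a `tanh` based soft threshold can however be sketched as follows; this is a conceptual illustration under the assumption that values well below the location `eta` are damped towards zero over a width `varpi`, and it is not the actual `confred` implementation:

```
import numpy as np

def soft_suppress ( vals , eta , varpi ) :
    # A smooth tanh gate: values well above eta are kept roughly unchanged
    # while values well below eta are damped towards zero.
    gate = 0.5 * ( 1.0 + np.tanh( ( vals - eta ) / varpi ) )
    return gate * vals

vals = np.array( [ 1.0 , 5.0 , 10.0 , 20.0 ] )
print ( soft_suppress( vals , eta = 10.0 , varpi = 5.0 ) )
```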

The data now contains samples along the columns and gene transcript symbols along the rows, where the original values have been quenched with low value suppression. The table has the following appearance

|NAME       |NGT_mm12_10591 | ... | DM2_mm81_10199 |
|:---       |           ---:|:---:|            ---:|
|215538_at  |    16.826041 | ... | 31.764484       |
|...        |              |     |                 |
|LDLR       |   19.261185  | ... | 30.004612       |

We proceed to write a journal data frame by adding the following lines to our code
```
    journal_df = pd.DataFrame([ v.split('_')[0] for v in analyte_df.columns] , columns=['Status'] , index = analyte_df.columns.values ).T
    print ( journal_df )
```
which will produce the following journal table :

|      |NGT_mm12_10591 | ... | DM2_mm81_10199 |
|:---    |         ---:|:---:|            ---:|
|Status  |         NGT | ... | DM2            |

Now we check whether there are aggregation tendencies among these two groups prior to the multifactor analysis. We could use the hierarchical clustering algorithm, but refrain from it and instead use the `associations` method together with the `connectivity` clustering algorithm. The `associations` can be thought of as a type of ranked correlation similar to Spearman correlation. If two samples are strongly associated with each other their association will be close to `1` (or `-1` if they are anti-associated). Since the samples are all human, with many transcript features, the values will be close to `1`. After recasting the `associations` into distances we can determine whether two samples are connected at a given distance by using the `connectivity` routine. All connected points are then grouped into technical clusters, or batches, and added to the journal.
```
    from impetuous.quantification import associations
    ranked_similarity_df = associations ( analyte_df .T )
    sample_distances = ( 1 - ranked_similarity_df ) * 2.

    from impetuous.clustering import connectivity
    cluster_ids = [ 'B'+str(c[0]) for c in connectivity( sample_distances.values , 5.0E-2 )[1] ]
    print ( cluster_ids )

    journal_df .loc['Batches'] = cluster_ids
```
which will produce a cluster list containing `13` batches whose members are either `Normal Glucose Tolerant` or have `Diabetes Mellitus 2`. We write down the formula for deducing which genes are best at recreating the diabetic state and batch identities by writing:
```
    formula = 'f~C(Status)+C(Batches)'
```
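For intuition, the batching step above amounts to grouping all samples that can be linked by pairwise distances below the cutoff. A minimal sketch of that idea using SciPy connected components; this illustrates the concept only and is not the `impetuous.clustering.connectivity` implementation, whose labels and ordering may differ:

```
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def batch_labels ( distance_matrix , cutoff ) :
    # Two samples share a batch if a chain of pairwise distances
    # below the cutoff connects them (connected components).
    adjacency = csr_matrix( distance_matrix < cutoff )
    n_batches , labels = connected_components( adjacency , directed = False )
    return [ 'B' + str(l) for l in labels ]

# Hypothetical toy distance matrix for four samples.
D = np.array( [ [ 0.00 , 0.01 , 0.90 , 0.90 ] ,
                [ 0.01 , 0.00 , 0.90 , 0.90 ] ,
                [ 0.90 , 0.90 , 0.00 , 0.02 ] ,
                [ 0.90 , 0.90 , 0.02 , 0.00 ] ] )
print ( batch_labels( D , 5.0E-2 ) )   # two batches of two samples each
```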
The multifactor method calculates how to produce an encoded version of the journal data frame given an analyte data set. It does this by forming the pseudo inverse matrix that best describes the inverse of the analyte frame and then calculating the dot product of that inverse with the encoded journal data frame. This yields the coefficient frame needed to solve for the numerical encoding frame. The method has many nice statistical properties that we will not discuss further here. The first thing that the multifactor method does is to create the encoded data frame. The encoded data frame for this problem can be obtained with the following code snippet
```
    from impetuous.quantification import interpret_problem   # assuming interpret_problem resides in the quantification module

    encoded_df = interpret_problem ( analyte_df , journal_df , formula )
    print ( encoded_df )
```
and it will look something like this

|      |NGT_mm12_10591 | ... | DM2_mm81_10199 |
|:---  |           ---:|:---:|            ---:|
|B10   |         0.0   | ... | 0.0            |
|B5    |         0.0   | ... | 0.0            |
|B12   |         0.0   | ... | 1.0            |
|B2    |         0.0   | ... | 0.0            |
|B11   |         1.0   | ... | 0.0            |
|B8    |         0.0   | ... | 0.0            |
|B1    |         0.0   | ... | 0.0            |
|B7    |         0.0   | ... | 0.0            |
|B4    |         0.0   | ... | 0.0            |
|B0    |         0.0   | ... | 0.0            |
|B6    |         0.0   | ... | 0.0            |
|B9    |         0.0   | ... | 0.0            |
|B3    |         0.0   | ... | 0.0            |
|NGT   |         1.0   | ... | 0.0            |
|DM2   |         0.0   | ... | 1.0            |

This encoded data frame can be used to calculate statistical parameters or to solve other linear equations. Take the fast calculation of the mean gene expressions across all groups as an example
```
    print ( pd .DataFrame ( np.dot( encoded_df,analyte_df.T ) ,
                          columns = analyte_df .index ,
                          index   = encoded_df .index ) .apply ( lambda x:x/np.sum(encoded_df,1) ) )
```
which will immediately calculate the mean values of all transcripts across all different groups.
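For intuition, the coefficient step described earlier (multiplying the pseudo inverse of the analyte frame with the encoded journal) can be sketched in plain NumPy. This shows only the linear algebra; the `multifactor_evaluation` routine below additionally performs the statistical evaluation and may differ in details:

```
    # Conceptual sketch: solve  encoded_df ~= beta . analyte_df  for beta
    # via the (SVD based) pseudo inverse of the analyte frame.
    beta = pd.DataFrame ( np.dot ( encoded_df.values , np.linalg.pinv ( analyte_df.values ) ) ,
                          index   = encoded_df.index ,
                          columns = analyte_df.index )
    print ( beta )   # one coefficient per group (rows) and transcript (columns)
```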

The `multifactor_evaluation` function calculates the coefficients that best recreate the encoded journal by employing the pseudo inverse of the analyte frame, computed via Singular Value Decomposition. The beta coefficients are then evaluated under a normal distribution assumption to obtain `p values`, and rank corrected `q values` are also returned. The full function can be called with the following code
```
    from impetuous.quantification import multifactor_evaluation
    multifactor_results = multifactor_evaluation ( analyte_df , journal_df , formula )

    print ( multifactor_results.sort_values('DM2,q').iloc[:25,:].index.values  )
```
which tells us that the genes
```
['MYH2' 'RPL39' 'HBG1 /// HBG2' 'DDN' 'UBC' 'RPS18' 'ACTC' 'HBA2' 'GAPD'
 'ANKRD2' 'NEB' 'MYL2' 'MT1H' 'KPNA4' 'CA3' 'RPLP2' 'MRLC2 /// MRCL3'
 '211074_at' 'SLC25A16' 'KBTBD10' 'HSPA2' 'LDHB' 'COX7B' 'COX7A1' 'APOD']
```
have something to do with altered metabolism in Type 2 Diabetics. We could now proceed to use the hierarchical enrichment routine to understand what that something is.

This example was meant as an illustration of some of the code implemented in the impetuous-gfa package.


# Manually updated code backups for this library :

GitLab:	https://gitlab.com/richardtjornhammar/impetuous

CSDN:	https://codechina.csdn.net/m0_52121311/impetuous
