OpenDocCN / pycaret
Unverified commit f4ff6547, authored on Jul 31, 2020 by PyCaret, committed via GitHub.

Delete utils.py

Parent: 30bf4382
Changes: 1 changed file, with 0 additions and 128 deletions (+0 −128)

build/lib/pycaret/utils.py — deleted (file mode 100644 → 0)
Contents of the deleted file:

```python
# Module: Utility
# Author: Moez Ali <moez.ali@queensu.ca>
# License: MIT

version_ = "2.0"

def version():
    print(version_)

def __version__():
    return version_

def check_metric(actual, prediction, metric, round=4):

    """
    Function to evaluate classification and regression metrics.
    """

    # general dependencies
    import numpy as np

    # metric calculation starts here
    if metric == 'Accuracy':
        from sklearn import metrics
        result = metrics.accuracy_score(actual, prediction)
        result = result.round(round)

    elif metric == 'Recall':
        from sklearn import metrics
        result = metrics.recall_score(actual, prediction)
        result = result.round(round)

    elif metric == 'Precision':
        from sklearn import metrics
        result = metrics.precision_score(actual, prediction)
        result = result.round(round)

    elif metric == 'F1':
        from sklearn import metrics
        result = metrics.f1_score(actual, prediction)
        result = result.round(round)

    elif metric == 'Kappa':
        from sklearn import metrics
        result = metrics.cohen_kappa_score(actual, prediction)
        result = result.round(round)

    elif metric == 'AUC':
        from sklearn import metrics
        result = metrics.roc_auc_score(actual, prediction)
        result = result.round(round)

    elif metric == 'MCC':
        from sklearn import metrics
        result = metrics.matthews_corrcoef(actual, prediction)
        result = result.round(round)

    elif metric == 'MAE':
        from sklearn import metrics
        result = metrics.mean_absolute_error(actual, prediction)
        result = result.round(round)

    elif metric == 'MSE':
        from sklearn import metrics
        result = metrics.mean_squared_error(actual, prediction)
        result = result.round(round)

    elif metric == 'RMSE':
        from sklearn import metrics
        result = metrics.mean_squared_error(actual, prediction)
        result = np.sqrt(result)
        result = result.round(round)

    elif metric == 'R2':
        from sklearn import metrics
        result = metrics.r2_score(actual, prediction)
        result = result.round(round)

    elif metric == 'RMSLE':
        result = np.sqrt(np.mean(np.power(np.log(np.array(abs(prediction)) + 1)
                                          - np.log(np.array(abs(actual)) + 1), 2)))
        result = result.round(round)

    elif metric == 'MAPE':
        mask = actual != 0
        result = (np.fabs(actual - prediction) / actual)[mask].mean()
        result = result.round(round)

    return result

def enable_colab():

    """
    Function to render plotly visuals in colab.
    """

    def configure_plotly_browser_state():
        import IPython
        display(IPython.core.display.HTML('''
            <script src="/static/components/requirejs/require.js"></script>
            <script>
              requirejs.config({
                paths: {
                  base: '/static/base',
                  plotly: 'https://cdn.plot.ly/plotly-latest.min.js?noext',
                },
              });
            </script>
            '''))

    import IPython
    IPython.get_ipython().events.register('pre_run_cell', configure_plotly_browser_state)
    print('Colab mode activated.')
```

(No newline at end of file.)
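For reference, the numpy-only branches of the deleted `check_metric` helper can be exercised standalone. The sketch below reproduces just the `'MAPE'` and `'RMSLE'` logic as it appears in the diff (the sample arrays are illustrative, not from the source); the sklearn-backed branches are omitted to keep the example dependency-free beyond numpy.

```python
import numpy as np

def check_metric(actual, prediction, metric, round=4):
    # Minimal excerpt of the deleted helper: only the two branches
    # that need numpy alone are reproduced here.
    if metric == 'MAPE':
        # mean absolute percentage error, skipping zero actuals
        mask = actual != 0
        result = (np.fabs(actual - prediction) / actual)[mask].mean()
    elif metric == 'RMSLE':
        # root mean squared log error on absolute values, as in the file
        result = np.sqrt(np.mean(np.power(np.log(np.array(abs(prediction)) + 1)
                                          - np.log(np.array(abs(actual)) + 1), 2)))
    # numpy scalars expose .round(), which is what the original relies on
    return result.round(round)

actual = np.array([100.0, 200.0, 300.0])
prediction = np.array([110.0, 190.0, 330.0])
print(check_metric(actual, prediction, 'MAPE'))  # → 0.0833
```

Note that `result.round(round)` only works because sklearn and numpy return numpy scalars; a plain Python `float` has no `.round()` method, which is one reason the helper imports numpy up front.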