Commit f1fda85d authored by F feilong

fix init

Parent 1790ff8c
.vscode
.idea
.DS_Store
__pycache__
*.pyc
*.zip
*.out
\ No newline at end of file
# skill_tree_ai
AI skill tree
\ No newline at end of file
AI skill tree
## Initializing the skill tree
Skill-tree synthesis and id generation are currently handled by a unified Python script; install its dependencies first:
```bash
pip install -r requirement.txt
```
## Directory structure
The data directory contains a three-level hierarchy of difficulty nodes / chapter nodes / knowledge nodes (a layout sketch follows this list):
* Skill-tree `skeleton file`
    * Location: `data/tree.json`
    * Note: this file is generated by running `python main.py`; do not edit it by hand
* Skill-tree `root node` config file:
    * Location: `data/config.json`
    * Note: fields such as the keywords can be edited, but the `node_id` field is generated; do not edit it
* Skill-tree `difficulty nodes`
    * Location: `data/xxx`, e.g. `data/1.AI初阶`
    * Notes:
        * Each skill tree has 3 difficulty levels; the numeric prefix on the directory name is **required** and keeps the folders in order
        * Each directory contains a `config.json` where keywords can be configured; the `node_id` field is generated, do not edit it
* Skill-tree `chapter nodes`
    * Location: `data/xxx/xxx`, e.g. `data/1.AI初阶/1.预备知识`
    * Notes:
        * Each difficulty level has n chapters; the numeric prefix on the directory name is **required** and keeps the folders in order
        * Each directory contains a `config.json` where keywords can be configured; the `node_id` field is generated, do not edit it
* Skill-tree `knowledge nodes`
    * Location: `data/xxx/xxx/xxx`, e.g. `data/1.AI初阶/1.预备知识/1.AI简介`
    * Notes:
        * Each chapter has n knowledge nodes; the numeric prefix on the directory name is **required** and keeps the folders in order
        * Each directory contains a `config.json`
            * the `node_id` field is generated; do not edit it
            * the `keywords` field configures keywords
            * the `children` field configures the subtree under this `knowledge node`, described below
            * the `export` field configures the exercises exported by this `knowledge node`, described below
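Putting this together, a minimal layout using only the example paths mentioned above (a real tree has more nodes at every level) looks roughly like this:
```
data/
├── config.json                  # root-node config (tree_name, keywords, node_id)
├── tree.json                    # generated skeleton, do not edit
└── 1.AI初阶/                     # difficulty node
    ├── config.json
    └── 1.预备知识/                # chapter node
        ├── config.json
        └── 1.AI简介/              # knowledge node
            ├── config.json
            ├── helloworld.json  # exercise definition listed in "export"
            └── helloworld.md    # exercise content in Markdown
```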
## Subtree structure of a `knowledge node`
For example, `data/1.AI初阶/1.预备知识/1.AI简介/config.json` configures the subtree under that knowledge node; it gives the skill-tree community service deeper data to match against at this node:
```json
{
// ...
"children": [
{
"AI简史": {
"keywords": [
"AI起源",
"人工智能简史"
],
"children": []
}
}
],
}
```
## Exercises exported by a `knowledge node`
For example, `data/1.AI初阶/1.预备知识/1.AI简介/config.json` configures the exercises exported by that knowledge node:
```json
{
// ...
"export": [
"helloworld.json",
// ...
]
}
```
The `export` field lists the `json` files that define the exercises. Next we look at how to write an exercise.
## Configuring an exported exercise and its options
Exercises and their options can currently be written directly in Markdown.
Continuing the example above, we add a definition file for the exercise `helloworld` under the knowledge node, i.e. a `helloworld.json` file in the `data/1.AI初阶/1.预备知识/1.AI简介` directory:
```json
{
"type": "code_options",
"author": "幻灰龙",
"source": "helloworld.md",
"notebook_enable": true
}
```
Where:
* the `type` field is currently always `code_options`
* the `notebook_enable` field determines whether a corresponding `notebook` is generated for this exercise
* the `source` field points to the `markdown` file in which the exercise is written.
Now create a `helloworld.md` and edit it as follows:
````markdown
# Hello World
HelloWorld, 请阅读如下代码:
```python
import numpy as np
def test():
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3
    # TODO(选择选项中的代码填充此处)
y_predict = reg.predict(np.array([[3, 5]]))
print(y_predict)
if __name__ == '__main__':
test()
```
若将以下选项中的代码分别填充到上述代码中**TODO**处,哪个选项不是线性模型?
## template
```python
import numpy as np
def test():
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3
    # 下面的 code 占位符会被替换成答案和选项代码
$code
y_predict = reg.predict(np.array([[3, 5]]))
print(y_predict)
if __name__ == '__main__':
test()
```
## 答案
```python
from sklearn import svm
reg = svm.SVC(kernel='rbf').fit(X, y)
```
## 选项
### 使用 LinearRegression
```python
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(X, y)
```
### 使用岭回归
```python
from sklearn.linear_model import Ridge
reg = Ridge(alpha=0.1).fit(X, y)
```
### 使用拉索算法
```python
from sklearn.linear_model import Lasso
reg = Lasso(alpha=0.1).fit(X, y)
```
````
This is the most basic Markdown structure for an exercise; notes:
* The level-1 heading is the `exercise title`
* The paragraph right after the level-1 heading is the `exercise description`
* `## template` is the template that is combined with the answer and the options to synthesize the `NoteBook` code (a sketch of the idea follows this list)
* `## 答案` is the option that correctly answers the question
* `## 选项` contains the distractor options
    * Each option has a level-3 heading, e.g. `### 使用 LinearRegression`
    * The generated exercise does not include these level-3 headings, so they can carry editorial notes
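The actual notebook synthesis is done by the toolchain; as a rough illustration of the idea (the function below is hypothetical, only the `$code` placeholder comes from the template format above), the answer or an option is spliced into the template like this:
```python
# Hypothetical sketch: splice one option's code into the "## template" block.
# Only the "$code" placeholder convention is taken from the exercise format above.
def render_option(template: str, option_code: str) -> str:
    return template.replace("$code", option_code)
```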
## Optional exercise source-code project
While writing an exercise, to make testing easier you can create a matching source-code subdirectory for it directly under the level-3 knowledge-node directory.
## Synthesizing the skill tree
Run `python main.py` in the repository root to synthesize the skill tree; the result is written to `data/tree.json`.
* During synthesis, the `node_id` in every directory's `config.json` is checked and generated if missing
* During synthesis, every exercise configuration listed under `export` in each knowledge node's `config.json` is checked for an `exercise_id` field, which is generated if missing (sketched below)
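These checks correspond to `ensure_node_id` and `ensure_exercises` in `src/tree.py` (included later in this commit); in essence the missing ids are filled in with UUIDs. A simplified sketch (the helper name `ensure_ids` is illustrative, not part of the codebase):
```python
import uuid

# Simplified from src/tree.py: fill in missing ids, leave existing ones untouched.
def ensure_ids(node_config: dict, exercise: dict, tree_name: str = "ai") -> None:
    if "node_id" not in node_config:
        # node ids look like "ai-<32 hex chars>"
        node_config["node_id"] = f"{tree_name}-{uuid.uuid4().hex}"
    if "exercise_id" not in exercise:
        exercise["exercise_id"] = uuid.uuid4().hex
```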
{
"node_id": "ai-3387d5d7a7684fbb9187e26d6d8d187b",
"keywords": [],
"children": [
{
"AI简史": {
"keywords": [
"AI起源",
"人工智能简史"
],
"children": []
}
}
],
"export": [
"helloworld.json"
]
}
\ No newline at end of file
{
"type": "code_options",
"author": "幻灰龙",
"source": "helloworld.md",
"notebook_enable": true
}
\ No newline at end of file
{
"node_id": "ai-861408a897f042fd8044bfc9838d2747",
"keywords": [],
"children": [],
"export": []
}
\ No newline at end of file
{
"node_id": "ai-8deab4930eef40b0bd9c2337e7ad5c51",
"keywords": [],
"children": [],
"export": []
}
\ No newline at end of file
{
"node_id": "ai-bc6f05e925e147fd8fca53041f70e022",
"keywords": []
}
\ No newline at end of file
{
"node_id": "ai-f51cf279b2c94e099da0f3e1fcfc793e",
"keywords": []
}
\ No newline at end of file
{
"node_id": "ai-d7c91624cb92446786eeaad0cd336445",
"keywords": []
}
\ No newline at end of file
{
"node_id": "ai-7c98592cf49347b69cc10b653731bd16",
"keywords": []
}
\ No newline at end of file
{
"node_id": "ai-8b462755b2014f90bff16ec87d2fb84c",
"keywords": []
}
\ No newline at end of file
{
"node_id": "ai-de60cc83f32541499c62e182ac952d83",
"keywords": []
}
\ No newline at end of file
{
"tree_name": "ai",
"keywords": [],
"node_id": "ai-e199f3e521db4347a8bc662f8f33ca6c"
}
\ No newline at end of file
{
"ai": {
"node_id": "ai-e199f3e521db4347a8bc662f8f33ca6c",
"keywords": [],
"children": [
{
"AI初阶": {
"node_id": "ai-7c98592cf49347b69cc10b653731bd16",
"keywords": [],
"children": [
{
"预备知识": {
"node_id": "ai-bc6f05e925e147fd8fca53041f70e022",
"keywords": [],
"children": [
{
"AI简介": {
"node_id": "ai-3387d5d7a7684fbb9187e26d6d8d187b",
"keywords": [],
"children": []
}
},
{
"线性反向传播": {
"node_id": "ai-861408a897f042fd8044bfc9838d2747",
"keywords": [],
"children": []
}
},
{
"梯度下降": {
"node_id": "ai-8deab4930eef40b0bd9c2337e7ad5c51",
"keywords": [],
"children": []
}
}
]
}
},
{
"线性回归": {
"node_id": "ai-f51cf279b2c94e099da0f3e1fcfc793e",
"keywords": [],
"children": []
}
},
{
"线性分类": {
"node_id": "ai-d7c91624cb92446786eeaad0cd336445",
"keywords": [],
"children": []
}
}
]
}
},
{
"AI中阶": {
"node_id": "ai-8b462755b2014f90bff16ec87d2fb84c",
"keywords": [],
"children": []
}
},
{
"AI高阶": {
"node_id": "ai-de60cc83f32541499c62e182ac952d83",
"keywords": [],
"children": []
}
}
]
}
}
\ No newline at end of file
# -*- coding: utf-8 -*-
from src.tree import TreeWalker
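# Build data/tree.json from the data/ directory; "ai" is used as both the tree name and the title.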
if __name__ == '__main__':
walker = TreeWalker("data", "ai", "ai")
walker.walk()
# -*- coding: utf-8 -*-
import logging
from genericpath import exists
import json
import os
import uuid
import sys
import re
id_set = set()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
def load_json(p):
with open(p, 'r', encoding='utf-8') as f:
return json.loads(f.read())
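# Write JSON with UTF-8 and a 2-space indent. If the target file already exists:
# exist_ok=False aborts with an error, exist_ok=True keeps the file unless override=True.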
def dump_json(p, j, exist_ok=False, override=False):
if os.path.exists(p):
if exist_ok:
if not override:
return
else:
logger.error(f"{p} already exist")
sys.exit(0)
with open(p, 'w+', encoding='utf-8') as f:
f.write(json.dumps(j, indent=2, ensure_ascii=False))
def ensure_config(path):
config_path = os.path.join(path, "config.json")
if not os.path.exists(config_path):
node = {"keywords": []}
dump_json(config_path, node, exist_ok=True, override=False)
return node
else:
return load_json(config_path)
def parse_no_name(d):
p = r'(\d+)\.(.*)'
m = re.search(p, d)
try:
no = int(m.group(1))
dir_name = m.group(2)
except:
sys.exit(0)
return no, dir_name
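# Remove entries from cfg["export"] whose files are missing on disk; returns True if the list changed.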
def check_export(base, cfg):
flag = False
exports = []
for export in cfg.get('export', []):
ecfg_path = os.path.join(base, export)
if os.path.exists(ecfg_path):
exports.append(export)
else:
flag = True
if flag:
cfg["export"] = exports
return flag
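# Walks the data/ directory (difficulty -> chapter -> knowledge nodes), ensures every
# config.json has a node_id, renumbers directories by their prefix, and writes data/tree.json.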
class TreeWalker:
def __init__(self, root, tree_name, title=None, log=None):
self.name = tree_name
self.root = root
self.title = tree_name if title is None else title
self.tree = {}
self.logger = logger if log is None else log
def walk(self):
root = self.load_root()
root_node = {
"node_id": root["node_id"],
"keywords": root["keywords"],
"children": []
}
self.tree[root["tree_name"]] = root_node
self.load_levels(root_node)
self.load_chapters(self.root, root_node)
for index, level in enumerate(root_node["children"]):
level_title = list(level.keys())[0]
level_node = list(level.values())[0]
level_path = os.path.join(self.root, f"{index+1}.{level_title}")
self.load_chapters(level_path, level_node)
for index, chapter in enumerate(level_node["children"]):
chapter_title = list(chapter.keys())[0]
chapter_node = list(chapter.values())[0]
chapter_path = os.path.join(
level_path, f"{index+1}.{chapter_title}")
self.load_sections(chapter_path, chapter_node)
for index, section_node in enumerate(chapter_node["children"]):
section_title = list(section_node.keys())[0]
full_path = os.path.join(
chapter_path, f"{index}.{section_title}")
if os.path.isdir(full_path):
self.ensure_exercises(full_path)
tree_path = os.path.join(self.root, "tree.json")
dump_json(tree_path, self.tree, exist_ok=True, override=True)
return self.tree
def load_levels(self, root_node):
levels = []
for level in os.listdir(self.root):
            if not os.path.isdir(os.path.join(self.root, level)):
continue
level_path = os.path.join(self.root, level)
num, config = self.load_level_node(level_path)
levels.append((num, config))
levels = self.resort_children(self.root, levels)
root_node["children"] = [item[1] for item in levels]
return root_node
def load_level_node(self, level_path):
config = self.ensure_level_config(level_path)
num, name = self.extract_node_env(level_path)
result = {
name: {
"node_id": config["node_id"],
"keywords": config["keywords"],
"children": [],
}
}
return num, result
def load_chapters(self, base, level_node):
chapters = []
for name in os.listdir(base):
full_name = os.path.join(base, name)
if os.path.isdir(full_name):
num, chapter = self.load_chapter_node(full_name)
chapters.append((num, chapter))
chapters = self.resort_children(base, chapters)
level_node["children"] = [item[1] for item in chapters]
return level_node
def load_sections(self, base, chapter_node):
sections = []
for name in os.listdir(base):
full_name = os.path.join(base, name)
if os.path.isdir(full_name):
num, section = self.load_section_node(full_name)
sections.append((num, section))
sections = self.resort_children(base, sections)
chapter_node["children"] = [item[1] for item in sections]
return chapter_node
def resort_children(self, base, children):
children.sort(key=lambda item: item[0])
for index, [number, element] in enumerate(children):
title = list(element.keys())[0]
origin = os.path.join(base, f"{number}.{title}")
posted = os.path.join(base, f"{index+1}.{title}")
if origin != posted:
self.logger.info(f"rename [{origin}] to [{posted}]")
os.rename(origin, posted)
return children
def ensure_chapters(self):
for subdir in os.listdir(self.root):
self.ensure_level_config(subdir)
def load_root(self):
config_path = os.path.join(self.root, "config.json")
if not os.path.exists(config_path):
config = {
"tree_name": self.name,
"keywords": [],
"node_id": self.gen_node_id(),
}
dump_json(config_path, config, exist_ok=True, override=True)
else:
config = load_json(config_path)
flag, result = self.ensure_node_id(config)
if flag:
dump_json(config_path, result, exist_ok=True, override=True)
return config
def ensure_level_config(self, path):
config_path = os.path.join(path, "config.json")
if not os.path.exists(config_path):
config = {
"node_id": self.gen_node_id()
}
dump_json(config_path, config, exist_ok=True, override=True)
else:
config = load_json(config_path)
flag, result = self.ensure_node_id(config)
if flag:
dump_json(config_path, config, exist_ok=True, override=True)
return config
def ensure_chapter_config(self, path):
config_path = os.path.join(path, "config.json")
if not os.path.exists(config_path):
config = {
"node_id": self.gen_node_id(),
"keywords": []
}
dump_json(config_path, config, exist_ok=True, override=True)
else:
config = load_json(config_path)
flag, result = self.ensure_node_id(config)
if flag:
dump_json(config_path, config, exist_ok=True, override=True)
return config
def ensure_section_config(self, path):
config_path = os.path.join(path, "config.json")
if not os.path.exists(config_path):
config = {
"node_id": self.gen_node_id(),
"keywords": [],
"children": [],
"export": []
}
dump_json(config_path, config, exist_ok=True, override=True)
else:
config = load_json(config_path)
flag, result = self.ensure_node_id(config)
if flag:
dump_json(config_path, config, exist_ok=True, override=True)
return config
def ensure_node_id(self, config):
if "node_id" not in config:
config["node_id"] = self.gen_node_id()
return True, config
else:
return False, config
def gen_node_id(self):
return f"{self.name}-{uuid.uuid4().hex}"
def extract_node_env(self, path):
try:
_, dir = os.path.split(path)
self.logger.info(path)
number, title = dir.split(".", 1)
return int(number), title
except Exception as error:
self.logger.error(f"目录 [{path}] 解析失败,结构不合法,可能是缺少序号")
sys.exit(1)
def load_chapter_node(self, full_name):
config = self.ensure_chapter_config(full_name)
num, name = self.extract_node_env(full_name)
result = {
name: {
"node_id": config["node_id"],
"keywords": config["keywords"],
"children": [],
}
}
return num, result
def load_section_node(self, full_name):
config = self.ensure_section_config(full_name)
num, name = self.extract_node_env(full_name)
result = {
name: {
"node_id": config["node_id"],
"keywords": config["keywords"],
"children": config.get("children", [])
}
}
# if "children" in config:
# result["children"] = config["children"]
return num, result
def ensure_exercises(self, section_path):
config = self.ensure_section_config(section_path)
for e in config.get("export", []):
full_name = os.path.join(section_path, e)
exercise = load_json(full_name)
if "exercise_id" not in exercise:
exercise["exercise_id"] = uuid.uuid4().hex
                dump_json(full_name, exercise, exist_ok=True, override=True)