Commit 261bfd00 authored by H Haojun Liao

Merge branch 'develop' into feature/query

......@@ -46,7 +46,7 @@ TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致
</tr>
</table>
Note: Unlike the JNI approach, the RESTful interface is stateless. When using JDBC-RESTful, the database name of tables and STables must be specified in the SQL statement. For example:
Note: Unlike the JNI approach, the RESTful interface is stateless. When using JDBC-RESTful, the database name of tables and STables must be specified in the SQL statement. (Starting with TDengine 2.1.8.0, the default database for the current SQL statement can also be specified in the RESTful URL.) For example:
```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```
......
......@@ -654,22 +654,23 @@ conn.close()
To support development on all kinds of platforms, TDengine provides an API that conforms to the REST design standard, i.e. the RESTful API. To minimize the learning cost, and unlike the RESTful API design of other databases, TDengine operates the database directly through the SQL statement contained in the BODY of an HTTP POST request; only a single URL is needed. See the [video tutorial](https://www.taosdata.com/blog/2020/11/11/1965.html) for how to use the RESTful connector.
Note: One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; every reference to a table or STable name must carry the database name prefix.
Note: One difference from the native connectors is that the RESTful interface is stateless, so the `USE db_name` command has no effect; every reference to a table or STable name must carry the database name prefix. (Starting with version 2.1.8.0, a db_name can be specified in the RESTful URL; in that case, if the SQL statement does not carry a database name prefix, the db_name given in the URL is used.)
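As an illustration, here is a minimal Python sketch (standard library only; the host name h1.taosdata.com, the default root/taosdata credentials, and the demo database are placeholders reused from the examples in this section) of sending a single SQL statement over the stateless RESTful interface, either with the database prefix inside the SQL text or, on 2.1.8.0 and later, with the database name appended to the URL:
```python
import urllib.request

def rest_query(sql, url="http://h1.taosdata.com:6041/rest/sql"):
    # Basic auth token = base64("root:taosdata"); its derivation is shown in the
    # authentication part of this section.
    req = urllib.request.Request(url, data=sql.encode(),
                                 headers={"Authorization": "Basic cm9vdDp0YW9zZGF0YQ=="})
    with urllib.request.urlopen(req) as resp:   # POST, because a request body is supplied
        return resp.read().decode()

# Stateless: the table name carries its database prefix ...
print(rest_query("SELECT * FROM demo.d1001"))
# ... or, since 2.1.8.0, the default database can go into the URL instead.
print(rest_query("SELECT * FROM d1001", url="http://h1.taosdata.com:6041/rest/sql/demo"))
```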
### Installation
The RESTful interface does not depend on any TDengine library, so no TDengine library needs to be installed on the client side; the client's development language only needs to support the HTTP protocol.
### Verification
With the TDengine server already installed, you can verify the RESTful interface as follows.
The example below uses the curl tool (make sure it is installed) in an Ubuntu environment to verify that the RESTful interface works properly.
The example lists all databases; replace h1.taosdata.com and 6041 (the default value) with the FQDN and port number of the actual running TDengine service:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taosdata.com:6041/rest/sql
```
A return value like the following indicates that the verification passed:
```json
{
......@@ -682,22 +683,23 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'show databases;' h1.taos
}
```
### Using the RESTful Connector
#### HTTP Request Format
```
http://<fqdn>:<port>/rest/sql
http://<fqdn>:<port>/rest/sql/[db_name]
```
Parameter description:
- fqdn: the FQDN or IP address of any host in the cluster
- port: the httpPort configuration item in the configuration file, 6041 by default
- db_name: optional parameter that specifies the default database name for the SQL statement being executed. (Supported since version 2.1.8.0.)
For example: http://h1.taos.com:6041/rest/sql is a URL pointing to h1.taos.com:6041.
For example: http://h1.taos.com:6041/rest/sql/test is a URL pointing to h1.taos.com:6041 that sets the default database name to test.
The Header of the HTTP request must carry authentication information. TDengine supports two mechanisms, Basic authentication and custom authentication; later versions will provide a standard, secure digital-signature mechanism for authentication.
- Custom authentication information looks as follows (<token> is introduced later)
......@@ -711,25 +713,25 @@ Authorization: Taosd <TOKEN>
Authorization: Basic <TOKEN>
```
The BODY of the HTTP request is a complete SQL statement. Data tables in the SQL statement should carry a database prefix, e.g. \<db-name>.\<tb-name>. If a table name does not carry a database prefix, the system returns an error, because the HTTP module is only a simple forwarder and has no notion of a current DB.
The BODY of the HTTP request is a complete SQL statement. Data tables in the SQL statement should carry a database prefix, e.g. \<db_name>.\<tb_name>. If a table name does not carry a database prefix and no database name is specified in the URL either, the system returns an error, because the HTTP module is only a simple forwarder and has no notion of a current DB.
Use curl to issue an HTTP request with custom authentication as follows:
```bash
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql
curl -H 'Authorization: Basic <TOKEN>' -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
or
```bash
curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql/[db_name]
```
Here, `TOKEN` is `{username}:{password}` encoded in Base64; for example, `root:taosdata` encodes to `cm9vdDp0YW9zZGF0YQ==`.
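For reference, a short Python sketch that reproduces this token; since the value is fixed by Base64 encoding, any language produces the same string:
```python
import base64

# TOKEN = base64("{username}:{password}") -- for the default credentials this is
# exactly the value used by the curl examples in this section.
token = base64.b64encode(b"root:taosdata").decode()
print(token)                            # cm9vdDp0YW9zZGF0YQ==
print(token == "cm9vdDp0YW9zZGF0YQ==")  # True
```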
### HTTP Response Format
The return value is in JSON format, as follows:
```json
{
......@@ -747,9 +749,9 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
Description:
- status: indicates whether the operation succeeded or failed.
- head: the table definition; if no result set is returned, there is only a single "affected_rows" column. (Starting with version 2.0.17.0, it is recommended not to rely on the head field to determine data column types; use column_meta instead. The head field may be removed from the return value in a future version.)
- column_meta: added to the return value since version 2.0.17.0 to describe the data type of each column in data. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means the column name is "current", the column type is 6, i.e. float, and the type length is 4, i.e. a float represented by 4 bytes. If the column type is binary or nchar, the type length indicates the maximum content length the column can hold, not the length of the actual data in this particular response. For nchar columns, the type length is the number of Unicode characters that can be stored, not bytes.
- data: the actual returned data, presented row by row; if no result set is returned, it contains only [[affected_rows]]. The column order of each row in data exactly matches the column order described in column_meta (see the parsing sketch after this list).
- rows: the total number of rows of data.
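As an illustration only, a small Python sketch (it assumes `resp_text` holds the JSON body returned by /rest/sql) that follows the recommendation above and uses column_meta rather than head to turn each row of data into a column-name-to-value mapping:
```python
import json

def rows_as_dicts(resp_text):
    resp = json.loads(resp_text)
    if resp["status"] != "succ":          # status reports success or failure
        raise RuntimeError(resp)
    # each column_meta entry is [name, type, length]; data rows use the same order
    names = [meta[0] for meta in resp["column_meta"]]
    return [dict(zip(names, row)) for row in resp["data"]]
```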
Column type codes in column_meta:
......@@ -766,13 +768,13 @@ column_meta 中的列类型说明:
### Custom Authorization Token
An HTTP request must carry the authorization token `<TOKEN>` for identification. The token is usually provided by the administrator and can be obtained simply by sending an `HTTP GET` request, as follows:
```bash
curl http://<fqdn>:<port>/rest/login/<username>/<password>
```
Here, `fqdn` is the FQDN or IP address of the TDengine database, port is the port number of the TDengine service, `username` is the database user name, and `password` is the database password. The return value is in `JSON` format, and the fields are as follows:
- status: flag indicating the result of the request
......@@ -798,7 +800,7 @@ curl http://192.168.0.1:6041/rest/login/root/taosdata
### Usage Examples
- Query all records of table d1001 in the demo database:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sql
......@@ -818,7 +820,7 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
}
```
- Create the demo database:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 192.168.0.1:6041/rest/sql
......@@ -837,9 +839,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 19
### Other Usages
#### Result Set with Unix Timestamps
When the HTTP request URL uses `sqlt`, the timestamps in the returned result set are expressed in Unix timestamp format, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001' 192.168.0.1:6041/rest/sqlt
......@@ -860,9 +862,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
}
```
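As a quick illustration (assuming the database uses the default millisecond precision), such a Unix timestamp can be converted back into a readable time with a few lines of Python:
```python
from datetime import datetime, timedelta, timezone

ts_ms = 1500000120005                  # an epoch value in milliseconds, as returned by /rest/sqlt
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print((epoch + timedelta(milliseconds=ts_ms)).isoformat())
# 2017-07-14T02:42:00.005000+00:00
```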
#### Result Set with UTC Time Strings
When the HTTP request URL uses `sqlutc`, the timestamps in the returned result set are expressed as UTC time strings, for example:
```bash
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.t1' 192.168.0.1:6041/rest/sqlutc
```
......@@ -884,13 +886,14 @@ HTTP请求URL采用`sqlutc`时,返回结果集的时间戳将采用UTC时间
### Important Configuration Items
Only some configuration parameters related to the RESTful interface are listed below; for other system parameters, see the notes in the configuration file. (Note: after a configuration change, the taosd service must be restarted for it to take effect.)
- The port number on which the RESTful service is exposed, bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the serverPort parameter).
- httpMaxThreads: the number of threads to start, 2 by default (since version 2.0.17.0, the default is half the number of CPU cores, rounded down).
- restfulRowLimit: the maximum number of rows in the returned result set (JSON format), 10240 by default.
- httpEnableCompress: whether compression is supported, disabled by default; currently TDengine only supports the gzip compression format.
- httpDebugFlag: log switch, 131 by default. 131: errors and alarms only; 135: debug information; 143: very detailed debug information.
- httpDbNameMandatory: whether a default database name must be specified in the RESTful URL. 0 by default, i.e. the check is disabled. If set to 1, every RESTful URL must carry a default database name; otherwise, regardless of whether the SQL statement being executed needs to specify a database, an execution error is returned and the statement is rejected (see the sketch after this list).
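To tie these parameters together, here is a hypothetical Python helper (a sketch for illustration, not part of TDengine) that builds the RESTful endpoint described earlier: the HTTP port defaults to serverPort + 11, and an optional default database name is appended to the path, which becomes mandatory once httpDbNameMandatory is set to 1:
```python
def rest_url(fqdn, server_port=6030, db_name=None):
    http_port = server_port + 11                  # default RESTful port: 6030 + 11 = 6041
    url = "http://{}:{}/rest/sql".format(fqdn, http_port)
    if db_name:                                   # optional since 2.1.8.0,
        url += "/" + db_name                      # required if httpDbNameMandatory = 1
    return url

print(rest_url("h1.taos.com"))                    # http://h1.taos.com:6041/rest/sql
print(rest_url("h1.taos.com", db_name="test"))    # http://h1.taos.com:6041/rest/sql/test
```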
## <a class="anchor" id="csharp"></a>CSharp Connector
......
......@@ -14,7 +14,7 @@ TDengine的集群管理极其简单,除添加和删除节点需要人工干预
**Step 1**: If any physical node used to build the cluster contains previous test data, has had a 1.x version installed, or has had another version of TDengine installed, delete it first and clear all data (if you need to keep the original data, contact the TAOS Data delivery team for an old-version upgrade and data migration). For detailed steps, see the blog post [Installation and Uninstallation of Various TDengine Packages](https://www.taosdata.com/blog/2019/08/09/566.html).
**Note 1:** Because FQDN information is written into files, if the FQDN was not configured or was changed before and TDengine has been started, be sure to clean up the previous data (`rm -rf /var/lib/taos/*`) only after ensuring that the data is useless or has been backed up;
**Note 2:** The client also needs to be configured to ensure that it can correctly resolve the FQDN of each node, whether through a DNS service or the Host file.
**Note 2:** The client also needs to be configured to ensure that it can correctly resolve the FQDN of each node, whether through a DNS service or by modifying the hosts file.
**Step 2**: It is recommended to close the firewall on all physical nodes, and at least ensure that TCP and UDP ports 6030-6042 are open. It is **strongly recommended** to close the firewall first and configure the ports after the cluster has been built;
......
......@@ -35,7 +35,7 @@ taos> DESCRIBE meters;
- The built-in function now is the current time of the client
- When inserting a record, if the timestamp is now, the current time of the client submitting the record is used
- Epoch Time: the timestamp can also be a long integer representing the number of milliseconds since 1970-01-01 00:00:00.000 (UTC/GMT). (Correspondingly, if the time precision of the database is set to "microseconds", a long-integer timestamp means the number of microseconds since 1970-01-01 00:00:00.000 (UTC/GMT); the same logic applies to nanosecond precision.)
- Time values can be added and subtracted; for example, now-2h means two hours before the query time (the last 2 hours). The time unit after the number can be b (nanoseconds), u (microseconds), a (milliseconds), s (seconds), m (minutes), h (hours), d (days), or w (weeks). For example, `select * from t1 where ts > now-2w and ts <= now-1w` queries exactly one week of data from two weeks ago. When specifying the time window (interval) of a down-sampling operation, the time unit can also be n (calendar month) and y (calendar year).
TDengine's default timestamp precision is milliseconds, but microsecond and nanosecond precision can be enabled via the PRECISION parameter passed at CREATE DATABASE time. (Nanosecond precision is supported since version 2.1.5.0.)
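As a small worked example, a Python sketch (standard library only) of how the same instant is written as a long integer under each of the three precisions:
```python
from datetime import datetime, timezone

instant = datetime(2017, 7, 14, 2, 40, 0, tzinfo=timezone.utc)
seconds = int(instant.timestamp())      # whole seconds since 1970-01-01 00:00:00 UTC
print(seconds * 1_000)                  # 1500000000000       (millisecond precision, the default)
print(seconds * 1_000_000)              # 1500000000000000    (microsecond precision)
print(seconds * 1_000_000_000)          # 1500000000000000000 (nanosecond precision)
```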
......@@ -722,6 +722,7 @@ Query OK, 1 row(s) in set (0.001091s)
1. The <> operator can also be written as !=; note that this operator cannot be applied to the timestamp field in the first column of a table.
2. The like operator uses wildcard strings for matching checks.
* In a wildcard string: '%' (percent sign) matches zero or more arbitrary characters; '\_' (underscore) matches a single arbitrary character.
* To match a literal \_ (underscore) character contained in the string itself, write it as `\_` in the wildcard string, i.e. add a backslash to escape it. (Supported since version 2.1.8.0; see the sketch after this list.)
* A wildcard string must not exceed 20 bytes. (Since version 2.1.6.1, the limit has been relaxed to 100 bytes and can be configured via the maxWildCardsLength parameter in taos.cfg. Very long wildcard strings are not recommended, as they may severely affect the performance of LIKE operations.)
3. To apply range filters on multiple fields at the same time, use the keyword AND to combine the different query conditions; filter conditions on different columns combined with OR are not yet supported.
4. For filtering on a single field, only one time filter condition can be set in a statement; for other (regular) columns or tag columns, however, the `OR` keyword can be used to combine filter conditions, e.g. `((value > 20 AND value < 30) OR (value < 12))`.
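As an illustration of the escaping rule above, a short Python sketch (the table and column names demo.devices and dev_id are hypothetical) that builds a LIKE pattern matching a literal underscore:
```python
# Match values that start with the literal prefix "sensor_": the backslash keeps
# the underscore literal instead of acting as the single-character wildcard.
pattern = r"sensor\_%"        # raw string so the backslash reaches the server unchanged
sql = "SELECT * FROM demo.devices WHERE dev_id LIKE '{}'".format(pattern)
print(sql)                    # SELECT * FROM demo.devices WHERE dev_id LIKE 'sensor\_%'
```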
......@@ -1220,27 +1221,36 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数
```
Function: returns the records of the specified fields of a table/STable at the specified time cross-section.
Return data type: same as the applied fields
Return data type: same as the field type
Applicable fields: all fields.
Applicable fields: numeric fields.
Applicable to: **tables and STables**.
Note: (this function has been available since version 2.0.15.0) INTERP must specify a time cross-section; if no data corresponds directly to that cross-section, interpolation is performed according to the FILL parameter setting. The condition clause can also carry more filter conditions, such as tags or tbname.
Note: (this function has been available since version 2.0.15.0) INTERP must specify a time cross-section; if no data corresponds directly to that cross-section, interpolation is performed according to the FILL parameter setting. In addition, the condition clause can carry filter conditions, such as tags or tbname.
An INTERP query requires that the queried time range lie within the time range covered by all records of the data set (table). If the given timestamp falls outside that range, no result is returned even if an interpolation directive is given.
Example:
```mysql
taos> select interp(*) from meters where ts='2017-7-14 10:42:00.005' fill(prev);
interp(ts) | interp(f1) | interp(f2) | interp(f3) |
====================================================================
2017-07-14 10:42:00.005 | 5 | 9 | 6 |
Query OK, 1 row(s) in set (0.002912s)
```
```sql
taos> SELECT INTERP(*) FROM meters WHERE ts='2017-7-14 18:40:00.004';
interp(ts) | interp(current) | interp(voltage) | interp(phase) |
==========================================================================================
2017-07-14 18:40:00.004 | 9.84020 | 216 | 0.32222 |
Query OK, 1 row(s) in set (0.002652s)
```
If the given timestamp has no corresponding data, no result is returned when no interpolation strategy is specified; if an interpolation strategy is specified, a result is returned according to that strategy.
```sql
taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005';
Query OK, 0 row(s) in set (0.004022s)
taos> select interp(*) from meters where tbname in ('t1') and ts='2017-7-14 10:42:00.005' fill(prev);
interp(ts) | interp(f1) | interp(f2) | interp(f3) |
====================================================================
2017-07-14 10:42:00.005 | 5 | 6 | 7 |
Query OK, 1 row(s) in set (0.002005s)
taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005' FILL(PREV);
interp(ts) | interp(current) | interp(voltage) | interp(phase) |
==========================================================================================
2017-07-14 18:40:00.005 | 9.88150 | 217 | 0.32500 |
Query OK, 1 row(s) in set (0.003056s)
```
### Computation Functions
......
......@@ -6,7 +6,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
* [TDengine Introduction and Features](/evaluation#intro)
* [TDengine Use Scenes](/evaluation#scenes)
* [TDengine Performance Metrics and Verification]((/evaluation#))
* [TDengine Performance Metrics and Verification](/evaluation#)
## [Getting Started](/getting-started)
......
# Quickly experience TDengine through Docker
While it is not recommended to deploy TDengine services via Docker in a production environment, Docker does a good job of shielding the environmental differences of the underlying operating system and is well suited for development, testing, or a first-time experience of installing and running TDengine. In particular, Docker makes it relatively easy to try TDengine on Mac OS X and Windows systems without having to install a virtual machine or rent an additional Linux server. In addition, starting from version 2.0.14.0, TDengine provides images for the X86-64, X86, arm64, and arm32 platforms, so non-mainstream computers that can run Docker, such as NAS devices, Raspberry Pi, and embedded development boards, can also easily experience TDengine based on this document.
The following sections explain, step by step, how to quickly build a single-node TDengine runtime environment via Docker to support development and testing.
## Docker download
The Docker tools themselves can be downloaded from [Docker official site](https://docs.docker.com/get-docker/).
After installation, you can check the Docker version in the command line terminal. If the version number is output properly, the Docker environment has been installed successfully.
```bash
$ docker -v
Docker version 20.10.3, build 48d30b5
```
## Running TDengine in a Docker container
1, Use the command to pull the TDengine image and make it run in the background.
```bash
$ docker run -d --name tdengine tdengine/tdengine
7760c955f225d72e9c1ec5a4cef66149a7b94dae7598b11eb392138877e7d292
```
- **docker run**: Run a container via Docker.
- **--name tdengine**: Set the container name; the container can later be referred to by this name.
- **-d**: Keep the container running in the background.
- **tdengine/tdengine**: The officially published TDengine application image to pull.
- **7760c955f225d72e9c1ec5a4cef66149a7b94dae7598b11eb392138877e7d292**: The long string returned is the container ID; the container can also be referred to by this ID.
2, Verify that the container is running correctly.
```bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS ···
c452519b0f9b tdengine/tdengine "taosd" 14 minutes ago Up 14 minutes ···
```
- **docker ps**: Lists information about all containers that are in running state.
- **CONTAINER ID**: Container ID.
- **IMAGE**: The image used.
- **COMMAND**: The command to run when starting the container.
- **CREATED**: The time when the container was created.
- **STATUS**: The container status. Up means running.
3, Go inside the Docker container and use TDengine.
```bash
$ docker exec -it tdengine /bin/bash
root@c452519b0f9b:~/TDengine-server-2.0.20.13#
```
- **docker exec**: Enter the container via the docker exec command; if you exit, the container will not stop.
- **-i**: Enter the interactive mode.
- **-t**: Specify a terminal.
- **c452519b0f9b**: The container ID, which needs to be modified according to the value returned by the docker ps command.
- **/bin/bash**: Load the container and run bash to interact with it.
4, After entering the container, execute the taos shell client program.
```bash
root@c452519b0f9b:~/TDengine-server-2.0.20.13# taos
Welcome to the TDengine shell from Linux, Client Version:2.0.20.13
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
```
The TDengine terminal successfully connects to the server and prints out a welcome message and version information. If it fails, an error message is printed.
In the TDengine terminal, you can create/delete databases, tables, super tables, etc., and perform insert and query operations via SQL commands. For details, you can refer to [TAOS SQL guide](https://www.taosdata.com/en/documentation/taos-sql).
## Learn more about TDengine with taosdemo
1, Following the above steps, exit the TDengine terminal program first.
```bash
taos> q
root@c452519b0f9b:~/TDengine-server-2.0.20.13#
```
2, Execute taosdemo from the command line interface.
```bash
root@c452519b0f9b:~/TDengine-server-2.0.20.13# taosdemo
taosdemo is simulating data generated by power equipments monitoring...
host: 127.0.0.1:6030
user: root
password: taosdata
configDir:
resultFile: ./output.txt
thread num of insert data: 10
thread num of create table: 10
top insert interval: 0
number of records per req: 30000
max sql length: 1048576
database count: 1
database[0]:
database[0] name: test
drop: yes
replica: 1
precision: ms
super table count: 1
super table[0]:
stbName: meters
autoCreateTable: no
childTblExists: no
childTblCount: 10000
childTblPrefix: d
dataSource: rand
iface: taosc
insertRows: 10000
interlaceRows: 0
disorderRange: 1000
disorderRatio: 0
maxSqlLen: 1048576
timeStampStep: 1
startTimestamp: 2017-07-14 10:40:00.000
sampleFormat:
sampleFile:
tagsFile:
columnCount: 3
column[0]:FLOAT column[1]:INT column[2]:FLOAT
tagCount: 2
tag[0]:INT tag[1]:BINARY(16)
Press enter key to continue or Ctrl-C to stop
```
After you press Enter, this command automatically creates a super table meters under the database test, with 10,000 tables under it named "d0" through "d9999". Each table has 10,000 records, and each record has four fields (ts, current, voltage, phase), with timestamps ranging from "2017-07-14 10:40:00.000" to "2017-07-14 10:40:09.999". Each table carries the tags location and groupId: groupId is set from 1 to 10, and location is set to "beijing" or "shanghai".
Executing this command takes a few minutes and ends up inserting a total of 100 million records.
3, Go to the TDengine terminal and view the data generated by taosdemo.
- **Go to the terminal interface.**
```bash
root@c452519b0f9b:~/TDengine-server-2.0.20.13# taos
Welcome to the TDengine shell from Linux, Client Version:2.0.20.13
Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
taos>
```
- **View the database.**
```bash
taos> show databases;
name | created_time | ntables | vgroups | ···
test | 2021-08-18 06:01:11.021 | 10000 | 6 | ···
log | 2021-08-18 05:51:51.065 | 4 | 1 | ···
```
- **View Super Tables.**
```bash
taos> use test;
Database changed.
taos> show stables;
name | created_time | columns | tags | tables |
============================================================================================
meters | 2021-08-18 06:01:11.116 | 4 | 2 | 10000 |
Query OK, 1 row(s) in set (0.003259s)
```
- **View the table and limit the output to 10 entries.**
```bash
taos> select * from test.t0 limit 10;
DB error: Table does not exist (0.002857s)
taos> select * from test.d0 limit 10;
ts | current | voltage | phase |
======================================================================================
2017-07-14 10:40:00.000 | 10.12072 | 223 | 0.34167 |
2017-07-14 10:40:00.001 | 10.16103 | 224 | 0.34445 |
2017-07-14 10:40:00.002 | 10.00204 | 220 | 0.33334 |
2017-07-14 10:40:00.003 | 10.00030 | 220 | 0.33333 |
2017-07-14 10:40:00.004 | 9.84029 | 216 | 0.32222 |
2017-07-14 10:40:00.005 | 9.88028 | 217 | 0.32500 |
2017-07-14 10:40:00.006 | 9.88110 | 217 | 0.32500 |
2017-07-14 10:40:00.007 | 10.08137 | 222 | 0.33889 |
2017-07-14 10:40:00.008 | 10.12063 | 223 | 0.34167 |
2017-07-14 10:40:00.009 | 10.16086 | 224 | 0.34445 |
Query OK, 10 row(s) in set (0.016791s)
```
- **View the tag values for the d0 table.**
```bash
taos> select groupid, location from test.d0;
groupid | location |
=================================
0 | shanghai |
Query OK, 1 row(s) in set (0.003490s)
```
## Stop the TDengine service that is running in Docker
```bash
$ docker stop tdengine
tdengine
```
- **docker stop**: Stop the specified running container with docker stop.
- **tdengine**: The name of the container.
## TDengine connected in Docker during programming development
There are two approaches to connecting from outside Docker to use the TDengine service running inside a Docker container:
1, By port mapping (-p), the open network port inside the container is mapped to the specified port of the host. By mounting the local directory (-v), you can synchronize the data inside the host and the container to prevent data loss after the container is deleted.
```bash
$ docker run -d -v /etc/taos:/etc/taos -p 6041:6041 tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
```
- The first command starts a docker container with TDengine running and maps the 6041 port of the container to port 6041 of the host.
- The second command, accessing TDengine through the RESTful interface, connects to port 6041 on the local machine, so the connection is successful.
Note: In this example, for convenience reasons, only port 6041 is mapped, which is required for RESTful. If you wish to connect to the TDengine service in a non-RESTful manner, you will need to map a total of 11 ports starting at 6030. In the example, mounting the local directory also only deals with the /etc/taos directory where the configuration files are located, but not the data storage directory.
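Still on the first approach, a minimal Python sketch (standard library only, assuming the port mapping above and the default root/taosdata credentials) that is equivalent to the second curl command:
```python
import base64
import urllib.request

token = base64.b64encode(b"root:taosdata").decode()
req = urllib.request.Request("http://127.0.0.1:6041/rest/sql",
                             data=b"show databases",
                             headers={"Authorization": "Basic " + token})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())     # same JSON document as the curl example above
```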
2, Go directly to the docker container to do development via the exec command. That is, put the program code in the same Docker container where the TDengine server is located and connect to the TDengine service local to the container.
```bash
$ docker exec -it tdengine /bin/bash
```
......@@ -2,7 +2,7 @@
## <a class="anchor" id="install"></a>Quick Install
TDengine software consists of 3 parts: server, client, and alarm module. At the moment, TDengine server only runs on Linux (Windows, mac OS and more OS supports will come soon), but client can run on either Windows or Linux. TDengine client can be installed and run on Windows or Linux. Applications based-on any OSes can all connect to server taosd via a RESTful interface. About CPU, TDengine supports X64/ARM64/MIPS64/Alpha64, and ARM32、RISC-V, other more CPU architectures will be supported soon. You can set up and install TDengine server either from the [source code](https://www.taosdata.com/en/getting-started/#Install-from-Source) or the [packages](https://www.taosdata.com/en/getting-started/#Install-from-Package).
TDengine software consists of 3 components: server, client, and alarm module. At the moment, the TDengine server only runs on Linux (Windows, macOS and more OS support will come soon), but the client can run on either Windows or Linux. The TDengine client can be installed and run on Windows or Linux. Applications based on any OS can all connect to the server taosd via a RESTful interface. As for CPUs, TDengine supports X64/ARM64/MIPS64/Alpha64, as well as ARM32 and RISC-V; more CPU architectures will be supported soon. You can set up and install the TDengine server either from the [source code](https://www.taosdata.com/en/getting-started/#Install-from-Source) or the [packages](https://www.taosdata.com/en/getting-started/#Install-from-Package).
### <a class="anchor" id="source-install"></a>Install from Source
......@@ -10,7 +10,9 @@ Please visit our [TDengine github page](https://github.com/taosdata/TDengine) fo
### Install from Docker Container
Please visit our [TDengine Official Docker Image: Distribution, Downloading, and Usage](https://www.taosdata.com/blog/2020/05/13/1509.html).
For the time being, it is not recommended to use Docker to deploy the client or server side of TDengine in production environments, but it is convenient to use Docker to deploy in development environments or when trying it for the first time. In particular, with Docker, it is easy to try TDengine in Mac OS X and Windows environments.
Please refer to the detailed operation in [Quickly experience TDengine through Docker](https://www.taosdata.com/en/documentation/getting-started/docker).
### <a class="anchor" id="package-install"></a>Install from Package
......
......@@ -119,7 +119,7 @@ As the data points are a series of data points over time, the data points genera
9. in addition to storage and query operations, various statistical and real-time calculation operations are also required;
10. data volume is huge, a system may generate over 10 billion data points in a day.
By utilizing the above characteristics, TDengine designs the storage and computing engine in a special and optimized way for time-series data, resulting in massive improvements in system efficiency.
In light of the characteristics mentioned above, TDengine designs the storage and computing engine in a special and optimized way for time-series data, resulting in massive improvements in system efficiency.
### Relational Database Model
......@@ -139,7 +139,7 @@ TDengine suggests using collection point ID as the table name (like D1001 in the
### STable: A Collection of Data Points in the Same Type
The method of one table for each point will bring a greatly increasing number of tables, which is difficult to manage. Moreover, applications often need to take aggregation operations between collection points, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the [STable(Super Table)](https://www.taosdata.com/en/documentation/super-table) concept is introduced by TDengine.
The method of creating one table per collection point results in a rapidly growing number of tables, which are difficult to manage. Moreover, applications often need to perform aggregation across collection points, so aggregation operations become complicated. To support efficient aggregation over multiple tables, TDengine introduces the STable (Super Table) concept.
STable is an abstract collection for a type of data point. A STable contains a set of points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable (a combination of data collection points of a specific type), in addition to defining the table structure of the collected metrics, it is also necessary to define the schema of its tag. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
......
......@@ -26,6 +26,7 @@ Replace the database operating in the current connection with “power”, other
- Any table or STable belongs to a database. Before creating a table, a database must be created first.
- Tables in two different databases cannot be JOINed.
- You need to specify a timestamp when creating and inserting records, or querying history records.
## <a class="anchor" id="create-stable"></a> Create a STable
......
......@@ -16,7 +16,7 @@ Please refer to the [video tutorial](https://www.taosdata.com/blog/2020/11/11/19
**Note 1:** Because the FQDN information will be written into files, if the FQDN has not been configured or was changed before and TDengine has been started, be sure to clean up the previous data (`rm -rf /var/lib/taos/*`) only after ensuring that the data is useless or has been backed up;
**Note 2:** The client also needs to be configured to ensure that it can correctly parse the FQDN configuration of each node, whether through DNS service or Host file.
**Note 2:** The client also needs to be configured to ensure that it can correctly resolve the FQDN of each node, whether through a DNS service or by modifying the hosts file.
**Step 2:** It is recommended to close the firewall of all physical nodes, and at least ensure that TCP and UDP ports 6030-6042 are open. It is **strongly recommended** to close the firewall first and configure the ports after the cluster is built;
......
......@@ -194,6 +194,9 @@ keepColumnName 1
# maximum number of rows returned by the restful interface
# restfulRowLimit 10240
# database name must be specified in restful interface if the following parameter is set, off by default
# httpDbNameMandatory 1
# The following parameter is used to limit the maximum number of lines in log files.
# max number of lines per log filters
# numOfLogLines 10000000
......
......@@ -19,18 +19,21 @@ else
fi
# Dynamic directory
data_dir="/var/lib/taos"
if [ "$osType" != "Darwin" ]; then
data_dir="/var/lib/taos"
log_dir="/var/log/taos"
else
log_dir=~/TDengine/log
data_dir="/usr/local/var/lib/taos"
log_dir="/usr/local/var/log/taos"
fi
data_link_dir="/usr/local/taos/data"
log_link_dir="/usr/local/taos/log"
cfg_install_dir="/etc/taos"
if [ "$osType" != "Darwin" ]; then
cfg_install_dir="/etc/taos"
else
cfg_install_dir="/usr/local/etc/taos"
fi
if [ "$osType" != "Darwin" ]; then
bin_link_dir="/usr/bin"
......@@ -44,10 +47,18 @@ else
fi
#install main path
install_main_dir="/usr/local/taos"
if [ "$osType" != "Darwin" ]; then
install_main_dir="/usr/local/taos"
else
install_main_dir="/usr/local/Cellar/tdengine/${verNumber}"
fi
# old bin dir
bin_dir="/usr/local/taos/bin"
if [ "$osType" != "Darwin" ]; then
bin_dir="/usr/local/taos/bin"
else
bin_dir="/usr/local/Cellar/tdengine/${verNumber}/bin"
fi
service_config_dir="/etc/systemd/system"
......@@ -59,12 +70,11 @@ GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
csudo=""
if command -v sudo > /dev/null; then
csudo="sudo"
fi
if [ "$osType" != "Darwin" ]; then
if command -v sudo > /dev/null; then
csudo="sudo"
fi
initd_mod=0
service_mod=2
if pidof systemd &> /dev/null; then
......@@ -137,18 +147,17 @@ function install_main_path() {
function install_bin() {
# Remove links
${csudo} rm -f ${bin_link_dir}/taos || :
${csudo} rm -f ${bin_link_dir}/taos || :
${csudo} rm -f ${bin_link_dir}/taosd || :
${csudo} rm -f ${bin_link_dir}/taosdemo || :
${csudo} rm -f ${bin_link_dir}/taosdump || :
if [ "$osType" != "Darwin" ]; then
${csudo} rm -f ${bin_link_dir}/taosd || :
${csudo} rm -f ${bin_link_dir}/taosdemo || :
${csudo} rm -f ${bin_link_dir}/perfMonitor || :
${csudo} rm -f ${bin_link_dir}/taosdump || :
${csudo} rm -f ${bin_link_dir}/set_core || :
${csudo} rm -f ${bin_link_dir}/rmtaos || :
fi
${csudo} rm -f ${bin_link_dir}/rmtaos || :
${csudo} cp -r ${binary_dir}/build/bin/* ${install_main_dir}/bin
${csudo} cp -r ${script_dir}/taosd-dump-cfg.gdb ${install_main_dir}/bin
......@@ -162,20 +171,18 @@ function install_bin() {
${csudo} chmod 0555 ${install_main_dir}/bin/*
#Make link
[ -x ${install_main_dir}/bin/taos ] && ${csudo} ln -s ${install_main_dir}/bin/taos ${bin_link_dir}/taos || :
[ -x ${install_main_dir}/bin/taos ] && ${csudo} ln -s ${install_main_dir}/bin/taos ${bin_link_dir}/taos || :
[ -x ${install_main_dir}/bin/taosd ] && ${csudo} ln -s ${install_main_dir}/bin/taosd ${bin_link_dir}/taosd || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo} ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo} ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
if [ "$osType" != "Darwin" ]; then
[ -x ${install_main_dir}/bin/taosd ] && ${csudo} ln -s ${install_main_dir}/bin/taosd ${bin_link_dir}/taosd || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo} ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo} ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/perfMonitor ] && ${csudo} ln -s ${install_main_dir}/bin/perfMonitor ${bin_link_dir}/perfMonitor || :
[ -x ${install_main_dir}/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
fi
if [ "$osType" != "Darwin" ]; then
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/rmtaos || :
else
[ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/rmtaos || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo} ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/rmtaos || :
fi
}
......@@ -222,7 +229,7 @@ function install_jemalloc() {
fi
if [ -d /etc/ld.so.conf.d ]; then
${csudo} echo "/usr/local/lib" > /etc/ld.so.conf.d/jemalloc.conf
echo "/usr/local/lib" | ${csudo} tee /etc/ld.so.conf.d/jemalloc.conf
${csudo} ldconfig
else
echo "/etc/ld.so.conf.d not found!"
......@@ -248,10 +255,8 @@ function install_lib() {
fi
else
${csudo} cp -Rf ${binary_dir}/build/lib/libtaos.* ${install_main_dir}/driver && ${csudo} chmod 777 ${install_main_dir}/driver/*
${csudo} ln -sf ${install_main_dir}/driver/libtaos.1.dylib ${lib_link_dir}/libtaos.1.dylib
${csudo} ln -sf ${lib_link_dir}/libtaos.1.dylib ${lib_link_dir}/libtaos.dylib
fi
install_jemalloc
if [ "$osType" != "Darwin" ]; then
......@@ -261,10 +266,14 @@ function install_lib() {
function install_header() {
${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
if [ "$osType" != "Darwin" ]; then
${csudo} rm -f ${inc_link_dir}/taos.h ${inc_link_dir}/taoserror.h || :
fi
${csudo} cp -f ${source_dir}/src/inc/taos.h ${source_dir}/src/inc/taoserror.h ${install_main_dir}/include && ${csudo} chmod 644 ${install_main_dir}/include/*
${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
if [ "$osType" != "Darwin" ]; then
${csudo} ln -s ${install_main_dir}/include/taos.h ${inc_link_dir}/taos.h
${csudo} ln -s ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h
fi
}
function install_config() {
......@@ -272,23 +281,20 @@ function install_config() {
if [ ! -f ${cfg_install_dir}/taos.cfg ]; then
${csudo} mkdir -p ${cfg_install_dir}
[ -f ${script_dir}/../cfg/taos.cfg ] && ${csudo} cp ${script_dir}/../cfg/taos.cfg ${cfg_install_dir}
[ -f ${script_dir}/../cfg/taos.cfg ] &&
${csudo} cp ${script_dir}/../cfg/taos.cfg ${cfg_install_dir}
${csudo} chmod 644 ${cfg_install_dir}/*
fi
${csudo} cp -f ${script_dir}/../cfg/taos.cfg ${install_main_dir}/cfg/taos.cfg.org
${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
if [ "$osType" != "Darwin" ]; then ${csudo} ln -s ${cfg_install_dir}/taos.cfg ${install_main_dir}/cfg
fi
}
function install_log() {
${csudo} rm -rf ${log_dir} || :
if [ "$osType" != "Darwin" ]; then
${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
else
mkdir -p ${log_dir} && chmod 777 ${log_dir}
fi
${csudo} mkdir -p ${log_dir} && ${csudo} chmod 777 ${log_dir}
${csudo} ln -s ${log_dir} ${install_main_dir}/log
}
......@@ -309,7 +315,6 @@ function install_connector() {
echo "WARNING: go connector not found, please check if want to use it!"
fi
${csudo} cp -rf ${source_dir}/src/connector/python ${install_main_dir}/connector
${csudo} cp ${binary_dir}/build/lib/*.jar ${install_main_dir}/connector &> /dev/null && ${csudo} chmod 777 ${install_main_dir}/connector/*.jar || echo &> /dev/null
}
......@@ -489,24 +494,21 @@ function install_TDengine() {
else
echo -e "${GREEN}Start to install TDEngine Client ...${NC}"
fi
install_main_path
if [ "$osType" != "Darwin" ]; then
install_data
fi
install_data
install_log
install_header
install_lib
install_connector
install_examples
install_bin
if [ "$osType" != "Darwin" ]; then
install_service
fi
install_config
if [ "$osType" != "Darwin" ]; then
......
......@@ -365,6 +365,8 @@ STblCond* tsGetTableFilter(SArray* filters, uint64_t uid, int16_t idx);
void tscRemoveCachedTableMeta(STableMetaInfo* pTableMetaInfo, uint64_t id);
char* cloneCurrentDBName(SSqlObj* pSql);
#ifdef __cplusplus
}
#endif
......
This diff is collapsed.
......@@ -371,12 +371,15 @@ static int32_t applySchemaAction(TAOS* taos, SSchemaAction* action, SSmlLinesInf
buildColumnDescription(action->alterSTable.field, result+n, capacity-n, &outBytes);
TAOS_RES* res = taos_query(taos, result); //TODO async doAsyncQuery
code = taos_errno(res);
char* errStr = taos_errstr(res);
char* begin = strstr(errStr, "duplicated column names");
bool tscDupColNames = (begin != NULL);
if (code != TSDB_CODE_SUCCESS) {
tscError("SML:0x%"PRIx64" apply schema action. error: %s", info->id, taos_errstr(res));
tscError("SML:0x%"PRIx64" apply schema action. error: %s", info->id, errStr);
}
taos_free_result(res);
if (code == TSDB_CODE_MND_FIELD_ALREAY_EXIST || code == TSDB_CODE_TSC_DUP_COL_NAMES) {
if (code == TSDB_CODE_MND_FIELD_ALREAY_EXIST || code == TSDB_CODE_MND_TAG_ALREAY_EXIST || tscDupColNames) {
TAOS_RES* res2 = taos_query(taos, "RESET QUERY CACHE");
code = taos_errno(res2);
if (code != TSDB_CODE_SUCCESS) {
......@@ -392,12 +395,15 @@ static int32_t applySchemaAction(TAOS* taos, SSchemaAction* action, SSmlLinesInf
result+n, capacity-n, &outBytes);
TAOS_RES* res = taos_query(taos, result); //TODO async doAsyncQuery
code = taos_errno(res);
char* errStr = taos_errstr(res);
char* begin = strstr(errStr, "duplicated column names");
bool tscDupColNames = (begin != NULL);
if (code != TSDB_CODE_SUCCESS) {
tscError("SML:0x%"PRIx64" apply schema action. error : %s", info->id, taos_errstr(res));
}
taos_free_result(res);
if (code == TSDB_CODE_MND_TAG_ALREAY_EXIST || code == TSDB_CODE_TSC_DUP_COL_NAMES) {
if (code == TSDB_CODE_MND_TAG_ALREAY_EXIST || code == TSDB_CODE_MND_FIELD_ALREAY_EXIST || tscDupColNames) {
TAOS_RES* res2 = taos_query(taos, "RESET QUERY CACHE");
code = taos_errno(res2);
if (code != TSDB_CODE_SUCCESS) {
......
......@@ -40,7 +40,6 @@
#include "qScript.h"
#include "ttype.h"
#include "qFilter.h"
#include "httpInt.h"
#define DEFAULT_PRIMARY_TIMESTAMP_COL_NAME "_c0"
......@@ -72,7 +71,6 @@ static int convertTimestampStrToInt64(tVariant *pVar, int32_t precision);
static bool serializeExprListToVariant(SArray* pList, tVariant **dst, int16_t colType, uint8_t precision);
static bool has(SArray* pFieldList, int32_t startIdx, const char* name);
static char* cloneCurrentDBName(SSqlObj* pSql);
static int32_t getDelimiterIndex(SStrToken* pTableName);
static bool validateTableColumnInfo(SArray* pFieldList, SSqlCmd* pCmd);
static bool validateTagParams(SArray* pTagsList, SArray* pFieldList, SSqlCmd* pCmd);
......@@ -432,7 +430,7 @@ int32_t readFromFile(char *name, uint32_t *len, void **buf) {
int32_t handleUserDefinedFunc(SSqlObj* pSql, struct SSqlInfo* pInfo) {
const char *msg1 = "function name is too long";
const char *msg1 = "invalidate function name";
const char *msg2 = "path is too long";
const char *msg3 = "invalid outputtype";
const char *msg4 = "invalid script";
......@@ -449,7 +447,10 @@ int32_t handleUserDefinedFunc(SSqlObj* pSql, struct SSqlInfo* pInfo) {
}
createInfo->name.z[createInfo->name.n] = 0;
// funcname's naming rule is same to column
if (validateColumnName(createInfo->name.z) != TSDB_CODE_SUCCESS) {
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg1);
}
strdequote(createInfo->name.z);
if (strlen(createInfo->name.z) >= TSDB_FUNC_NAME_LEN) {
......@@ -1607,7 +1608,8 @@ int32_t validateOneTag(SSqlCmd* pCmd, TAOS_FIELD* pTagField) {
for (int32_t i = 0; i < numOfTags + numOfCols; ++i) {
if (strncasecmp(pTagField->name, pSchema[i].name, sizeof(pTagField->name) - 1) == 0) {
return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pTagField->name, NULL);
//return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pTagField->name, NULL);
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), "duplicated column names");
}
}
......@@ -1660,7 +1662,8 @@ int32_t validateOneColumn(SSqlCmd* pCmd, TAOS_FIELD* pColField) {
// field name must be unique
for (int32_t i = 0; i < numOfTags + numOfCols; ++i) {
if (strncasecmp(pColField->name, pSchema[i].name, sizeof(pColField->name) - 1) == 0) {
return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pColField->name, NULL);
//return tscErrorMsgWithCode(TSDB_CODE_TSC_DUP_COL_NAMES, tscGetErrorMsgPayload(pCmd), pColField->name, NULL);
return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), "duplicated column names");
}
}
......@@ -1680,34 +1683,6 @@ static bool has(SArray* pFieldList, int32_t startIdx, const char* name) {
static char* getAccountId(SSqlObj* pSql) { return pSql->pTscObj->acctId; }
static char* cloneCurrentDBName(SSqlObj* pSql) {
char *p = NULL;
HttpContext *pCtx = NULL;
pthread_mutex_lock(&pSql->pTscObj->mutex);
STscObj *pTscObj = pSql->pTscObj;
switch (pTscObj->from) {
case TAOS_REQ_FROM_HTTP:
pCtx = pSql->param;
if (pCtx && pCtx->db[0] != '\0') {
char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN] = {0};
int32_t len = sprintf(db, "%s%s%s", pTscObj->acctId, TS_PATH_DELIMITER, pCtx->db);
assert(len <= sizeof(db));
p = strdup(db);
}
break;
default:
break;
}
if (p == NULL) {
p = strdup(pSql->pTscObj->db);
}
pthread_mutex_unlock(&pSql->pTscObj->mutex);
return p;
}
/* length limitation, strstr cannot be applied */
static int32_t getDelimiterIndex(SStrToken* pTableName) {
for (uint32_t i = 0; i < pTableName->n; ++i) {
......@@ -8812,7 +8787,7 @@ int32_t validateSqlNode(SSqlObj* pSql, SSqlNode* pSqlNode, SQueryInfo* pQueryInf
* select server_status();
* select server_version();
* select client_version();
* select current_database();
* select database();
*/
if (pSqlNode->from == NULL) {
assert(pSqlNode->fillType == NULL && pSqlNode->pGroupby == NULL && pSqlNode->pWhere == NULL &&
......
......@@ -1403,7 +1403,6 @@ int32_t tscBuildSyncDbReplicaMsg(SSqlObj* pSql, SSqlInfo *pInfo) {
}
int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
STscObj *pObj = pSql->pTscObj;
SSqlCmd *pCmd = &pSql->cmd;
pCmd->msgType = TSDB_MSG_TYPE_CM_SHOW;
pCmd->payloadLen = sizeof(SShowMsg) + 100;
......@@ -1426,9 +1425,9 @@ int32_t tscBuildShowMsg(SSqlObj *pSql, SSqlInfo *pInfo) {
}
if (tNameIsEmpty(&pTableMetaInfo->name)) {
pthread_mutex_lock(&pObj->mutex);
tstrncpy(pShowMsg->db, pObj->db, sizeof(pShowMsg->db));
pthread_mutex_unlock(&pObj->mutex);
char *p = cloneCurrentDBName(pSql);
tstrncpy(pShowMsg->db, p, sizeof(pShowMsg->db));
tfree(p);
} else {
tNameGetFullDbName(&pTableMetaInfo->name, pShowMsg->db);
}
......@@ -2924,13 +2923,15 @@ int32_t tscGetTableMetaImpl(SSqlObj* pSql, STableMetaInfo *pTableMetaInfo, bool
// just make runtime happy
if (pTableMetaInfo->tableMetaCapacity != 0 && pTableMetaInfo->pTableMeta != NULL) {
memset(pTableMetaInfo->pTableMeta, 0, pTableMetaInfo->tableMetaCapacity);
}
}
if (NULL == taosHashGetCloneExt(tscTableMetaMap, name, len, NULL, (void **)&(pTableMetaInfo->pTableMeta), &pTableMetaInfo->tableMetaCapacity)) {
tfree(pTableMetaInfo->pTableMeta);
}
STableMeta* pMeta = pTableMetaInfo->pTableMeta;
STableMeta* pSTMeta = (STableMeta *)(pSql->pBuf);
if (pMeta && pMeta->id.uid > 0) {
// in case of child table, here only get the
if (pMeta->tableType == TSDB_CHILD_TABLE) {
......@@ -2940,6 +2941,8 @@ int32_t tscGetTableMetaImpl(SSqlObj* pSql, STableMetaInfo *pTableMetaInfo, bool
return getTableMetaFromMnode(pSql, pTableMetaInfo, autocreate);
}
}
tscDebug("0x%"PRIx64 " %s retrieve tableMeta from cache, numOfCols:%d, numOfTags:%d", pSql->self, name, pMeta->tableInfo.numOfColumns, pMeta->tableInfo.numOfTags);
return TSDB_CODE_SUCCESS;
}
......
......@@ -29,6 +29,7 @@
#include "tsclient.h"
#include "ttimer.h"
#include "ttokendef.h"
#include "httpInt.h"
static void freeQueryInfoImpl(SQueryInfo* pQueryInfo);
......@@ -5112,3 +5113,31 @@ void tscRemoveCachedTableMeta(STableMetaInfo* pTableMetaInfo, uint64_t id) {
taosHashRemove(tscTableMetaMap, fname, len);
tscDebug("0x%"PRIx64" remove table meta %s, numOfRemain:%d", id, fname, (int32_t) taosHashGetSize(tscTableMetaMap));
}
char* cloneCurrentDBName(SSqlObj* pSql) {
char *p = NULL;
HttpContext *pCtx = NULL;
pthread_mutex_lock(&pSql->pTscObj->mutex);
STscObj *pTscObj = pSql->pTscObj;
switch (pTscObj->from) {
case TAOS_REQ_FROM_HTTP:
pCtx = pSql->param;
if (pCtx && pCtx->db[0] != '\0') {
char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN] = {0};
int32_t len = sprintf(db, "%s%s%s", pTscObj->acctId, TS_PATH_DELIMITER, pCtx->db);
assert(len <= sizeof(db));
p = strdup(db);
}
break;
default:
break;
}
if (p == NULL) {
p = strdup(pSql->pTscObj->db);
}
pthread_mutex_unlock(&pSql->pTscObj->mutex);
return p;
}
......@@ -130,6 +130,7 @@ extern int32_t tsHttpMaxThreads;
extern int8_t tsHttpEnableCompress;
extern int8_t tsHttpEnableRecordSql;
extern int8_t tsTelegrafUseFieldNum;
extern int8_t tsHttpDbNameMandatory;
// mqtt
extern int8_t tsEnableMqttModule;
......
......@@ -175,6 +175,7 @@ int32_t tsHttpMaxThreads = 2;
int8_t tsHttpEnableCompress = 1;
int8_t tsHttpEnableRecordSql = 0;
int8_t tsTelegrafUseFieldNum = 0;
int8_t tsHttpDbNameMandatory = 0;
// mqtt
int8_t tsEnableMqttModule = 0; // not finished yet, not started it by default
......@@ -1287,6 +1288,16 @@ static void doInitGlobalConfig(void) {
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
cfg.option = "httpDbNameMandatory";
cfg.ptr = &tsHttpDbNameMandatory;
cfg.valType = TAOS_CFG_VTYPE_INT8;
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
cfg.minValue = 0;
cfg.maxValue = 1;
cfg.ptrLength = 0;
cfg.unitType = TAOS_CFG_UTYPE_NONE;
taosInitConfigOption(cfg);
// debug flag
cfg.option = "numOfLogLines";
cfg.ptr = &tsNumOfLogLines;
......
......@@ -117,7 +117,6 @@
<exclude>**/DatetimeBefore1970Test.java</exclude>
<exclude>**/FailOverTest.java</exclude>
<exclude>**/InvalidResultSetPointerTest.java</exclude>
<exclude>**/RestfulConnectionTest.java</exclude>
<exclude>**/TSDBJNIConnectorTest.java</exclude>
<exclude>**/TaosInfoMonitorTest.java</exclude>
<exclude>**/UnsignedNumberJniTest.java</exclude>
......
......@@ -40,13 +40,13 @@ public class TSDBError {
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_SUBSCRIBE_FAILED, "failed to create subscription");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_UNSUPPORTED_ENCODING, "Unsupported encoding");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR, "internal error of database");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR, "internal error of database, please see taoslog for more details");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL, "JNI connection is NULL");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_RESULT_SET_NULL, "JNI result set is NULL");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_NUM_OF_FIELDS_0, "invalid num of fields");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_SQL_NULL, "empty sql string");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_FETCH_END, "fetch to the end of resultSet");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY, "JNI alloc memory failed");
TSDBErrorMap.put(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY, "JNI alloc memory failed, please see taoslog for more details");
}
public static SQLException createSQLException(int errorCode) {
......
......@@ -278,25 +278,20 @@ public class TSDBJNIConnector {
private native int validateCreateTableSqlImp(long connection, byte[] sqlBytes);
public long prepareStmt(String sql) throws SQLException {
long stmt;
try {
stmt = prepareStmtImp(sql.getBytes(), this.taos);
} catch (Exception e) {
e.printStackTrace();
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_ENCODING);
}
long stmt = prepareStmtImp(sql.getBytes(), this.taos);
if (stmt == TSDBConstants.JNI_CONNECTION_NULL) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL);
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_CONNECTION_NULL, "connection already closed");
}
if (stmt == TSDBConstants.JNI_SQL_NULL) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_SQL_NULL);
}
if (stmt == TSDBConstants.JNI_OUT_OF_MEMORY) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_OUT_OF_MEMORY);
}
if (stmt == TSDBConstants.JNI_TDENGINE_ERROR) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_JNI_TDENGINE_ERROR);
}
return stmt;
}
......@@ -313,8 +308,7 @@ public class TSDBJNIConnector {
private native int setBindTableNameImp(long stmt, String name, long conn);
public void setBindTableNameAndTags(long stmt, String tableName, int numOfTags, ByteBuffer tags, ByteBuffer typeList, ByteBuffer lengthList, ByteBuffer nullList) throws SQLException {
int code = setTableNameTagsImp(stmt, tableName, numOfTags, tags.array(), typeList.array(), lengthList.array(),
nullList.array(), this.taos);
int code = setTableNameTagsImp(stmt, tableName, numOfTags, tags.array(), typeList.array(), lengthList.array(), nullList.array(), this.taos);
if (code != TSDBConstants.JNI_SUCCESS) {
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN, "failed to bind table name and corresponding tags");
}
......
......@@ -32,6 +32,7 @@ import java.util.List;
import com.taosdata.jdbc.utils.NullType;
public class TSDBResultSetBlockData {
private static final int BINARY_LENGTH_OFFSET = 2;
private int numOfRows = 0;
private int rowIndex = 0;
......@@ -404,10 +405,8 @@ public class TSDBResultSetBlockData {
case TSDBConstants.TSDB_DATA_TYPE_BINARY: {
ByteBuffer bb = (ByteBuffer) this.colData.get(col);
bb.position(fieldSize * this.rowIndex);
bb.position((fieldSize + BINARY_LENGTH_OFFSET) * this.rowIndex);
int length = bb.getShort();
byte[] dest = new byte[length];
bb.get(dest, 0, length);
if (NullType.isBinaryNull(dest, length)) {
......@@ -419,16 +418,13 @@ public class TSDBResultSetBlockData {
case TSDBConstants.TSDB_DATA_TYPE_NCHAR: {
ByteBuffer bb = (ByteBuffer) this.colData.get(col);
bb.position(fieldSize * this.rowIndex);
bb.position((fieldSize + BINARY_LENGTH_OFFSET) * this.rowIndex);
int length = bb.getShort();
byte[] dest = new byte[length];
bb.get(dest, 0, length);
if (NullType.isNcharNull(dest, length)) {
return null;
}
try {
String charset = TaosGlobalConfig.getCharset();
return new String(dest, charset);
......
package com.taosdata.jdbc.cases;
import org.junit.Before;
import org.junit.Test;
import java.sql.*;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
public class UseNowInsertTimestampTest {
String url = "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata";
@Test
public void millisec() {
try (Connection conn = DriverManager.getConnection(url)) {
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists test");
stmt.execute("create database if not exists test precision 'ms'");
stmt.execute("use test");
stmt.execute("create table weather(ts timestamp, f1 int)");
stmt.execute("insert into weather values(now, 1)");
ResultSet rs = stmt.executeQuery("select * from weather");
rs.next();
Timestamp ts = rs.getTimestamp("ts");
assertEquals(13, Long.toString(ts.getTime()).length());
int nanos = ts.getNanos();
assertEquals(0, nanos % 1000_000);
stmt.execute("drop database if exists test");
} catch (SQLException e) {
e.printStackTrace();
}
}
@Test
public void microsec() {
try (Connection conn = DriverManager.getConnection(url)) {
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists test");
stmt.execute("create database if not exists test precision 'us'");
stmt.execute("use test");
stmt.execute("create table weather(ts timestamp, f1 int)");
stmt.execute("insert into weather values(now, 1)");
ResultSet rs = stmt.executeQuery("select * from weather");
rs.next();
Timestamp ts = rs.getTimestamp("ts");
int nanos = ts.getNanos();
assertEquals(0, nanos % 1000);
stmt.execute("drop database if exists test");
} catch (SQLException e) {
e.printStackTrace();
}
}
@Test
public void nanosec() {
try (Connection conn = DriverManager.getConnection(url)) {
Statement stmt = conn.createStatement();
stmt.execute("drop database if exists test");
stmt.execute("create database if not exists test precision 'ns'");
stmt.execute("use test");
stmt.execute("create table weather(ts timestamp, f1 int)");
stmt.execute("insert into weather values(now, 1)");
ResultSet rs = stmt.executeQuery("select * from weather");
rs.next();
Timestamp ts = rs.getTimestamp("ts");
int nanos = ts.getNanos();
assertTrue(nanos % 1000 != 0);
stmt.execute("drop database if exists test");
} catch (SQLException e) {
e.printStackTrace();
}
}
}
......@@ -9,6 +9,8 @@ import org.junit.Test;
import java.sql.*;
import java.util.Properties;
import static org.junit.Assert.assertEquals;
public class RestfulConnectionTest {
private static final String host = "127.0.0.1";
......@@ -26,7 +28,7 @@ public class RestfulConnectionTest {
ResultSet rs = stmt.executeQuery("select server_status()");
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
} catch (SQLException e) {
e.printStackTrace();
}
......@@ -38,7 +40,7 @@ public class RestfulConnectionTest {
ResultSet rs = pstmt.executeQuery();
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
}
@Test(expected = SQLFeatureNotSupportedException.class)
......@@ -49,7 +51,7 @@ public class RestfulConnectionTest {
@Test
public void nativeSQL() throws SQLException {
String nativeSQL = conn.nativeSQL("select * from log.log");
Assert.assertEquals("select * from log.log", nativeSQL);
assertEquals("select * from log.log", nativeSQL);
}
@Test
......@@ -87,7 +89,7 @@ public class RestfulConnectionTest {
public void getMetaData() throws SQLException {
DatabaseMetaData meta = conn.getMetaData();
Assert.assertNotNull(meta);
Assert.assertEquals("com.taosdata.jdbc.rs.RestfulDriver", meta.getDriverName());
assertEquals("com.taosdata.jdbc.rs.RestfulDriver", meta.getDriverName());
}
@Test
......@@ -103,25 +105,25 @@ public class RestfulConnectionTest {
@Test
public void setCatalog() throws SQLException {
conn.setCatalog("test");
Assert.assertEquals("test", conn.getCatalog());
assertEquals("test", conn.getCatalog());
}
@Test
public void getCatalog() throws SQLException {
conn.setCatalog("log");
Assert.assertEquals("log", conn.getCatalog());
assertEquals("log", conn.getCatalog());
}
@Test(expected = SQLFeatureNotSupportedException.class)
public void setTransactionIsolation() throws SQLException {
conn.setTransactionIsolation(Connection.TRANSACTION_NONE);
Assert.assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
conn.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
}
@Test
public void getTransactionIsolation() throws SQLException {
Assert.assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
assertEquals(Connection.TRANSACTION_NONE, conn.getTransactionIsolation());
}
@Test
......@@ -140,7 +142,7 @@ public class RestfulConnectionTest {
ResultSet rs = stmt.executeQuery("select server_status()");
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
}
......@@ -152,7 +154,7 @@ public class RestfulConnectionTest {
ResultSet rs = pstmt.executeQuery();
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
conn.prepareStatement("select server_status", ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
}
......@@ -175,13 +177,13 @@ public class RestfulConnectionTest {
@Test(expected = SQLFeatureNotSupportedException.class)
public void setHoldability() throws SQLException {
conn.setHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT);
Assert.assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);
}
@Test
public void getHoldability() throws SQLException {
Assert.assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
assertEquals(ResultSet.HOLD_CURSORS_OVER_COMMIT, conn.getHoldability());
}
@Test(expected = SQLFeatureNotSupportedException.class)
......@@ -210,7 +212,7 @@ public class RestfulConnectionTest {
ResultSet rs = stmt.executeQuery("select server_status()");
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
}
......@@ -222,7 +224,7 @@ public class RestfulConnectionTest {
ResultSet rs = pstmt.executeQuery();
rs.next();
int status = rs.getInt("server_status()");
Assert.assertEquals(1, status);
assertEquals(1, status);
conn.prepareStatement("select server_status", ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY, ResultSet.HOLD_CURSORS_OVER_COMMIT);
}
......@@ -299,11 +301,11 @@ public class RestfulConnectionTest {
Properties info = conn.getClientInfo();
String charset = info.getProperty(TSDBDriver.PROPERTY_KEY_CHARSET);
Assert.assertEquals("UTF-8", charset);
assertEquals("UTF-8", charset);
String locale = info.getProperty(TSDBDriver.PROPERTY_KEY_LOCALE);
Assert.assertEquals("en_US.UTF-8", locale);
assertEquals("en_US.UTF-8", locale);
String timezone = info.getProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE);
Assert.assertEquals("UTC-8", timezone);
assertEquals("UTC-8", timezone);
}
@Test
......@@ -313,11 +315,11 @@ public class RestfulConnectionTest {
conn.setClientInfo(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
String charset = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_CHARSET);
Assert.assertEquals("UTF-8", charset);
assertEquals("UTF-8", charset);
String locale = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_LOCALE);
Assert.assertEquals("en_US.UTF-8", locale);
assertEquals("en_US.UTF-8", locale);
String timezone = conn.getClientInfo(TSDBDriver.PROPERTY_KEY_TIME_ZONE);
Assert.assertEquals("UTC-8", timezone);
assertEquals("UTC-8", timezone);
}
@Test(expected = SQLFeatureNotSupportedException.class)
......@@ -345,14 +347,15 @@ public class RestfulConnectionTest {
conn.abort(null);
}
@Test(expected = SQLFeatureNotSupportedException.class)
@Test
public void setNetworkTimeout() throws SQLException {
conn.setNetworkTimeout(null, 1000);
}
@Test(expected = SQLFeatureNotSupportedException.class)
@Test
public void getNetworkTimeout() throws SQLException {
conn.getNetworkTimeout();
int timeout = conn.getNetworkTimeout();
assertEquals(0, timeout);
}
@Test
......
......@@ -102,9 +102,7 @@ _libtaos.taos_get_client_info.restype = c_char_p
def taos_get_client_info():
# type: () -> str
"""Get client version info.
获取客户端版本信息。
"""
"""Get client version info."""
return _libtaos.taos_get_client_info().decode()
......@@ -114,6 +112,7 @@ _libtaos.taos_get_server_info.argtypes = (c_void_p,)
def taos_get_server_info(connection):
# type: (c_void_p) -> str
"""Get server version as string."""
return _libtaos.taos_get_server_info(connection).decode()
......@@ -134,11 +133,10 @@ _libtaos.taos_connect.argtypes = c_char_p, c_char_p, c_char_p, c_char_p, c_uint1
def taos_connect(host=None, user="root", password="taosdata", db=None, port=0):
# type: (None|str, str, str, None|str, int) -> c_void_p
"""Create TDengine database connection.
创建数据库连接,初始化连接上下文。其中需要用户提供的参数包含:
- host: server hostname/FQDN, TDengine管理主节点的FQDN
- user: user name/用户名
- password: user password / 用户密码
- host: server hostname/FQDN
- user: user name
- password: user password
- db: database name (optional)
- port: server port
......@@ -187,11 +185,10 @@ _libtaos.taos_connect_auth.argtypes = c_char_p, c_char_p, c_char_p, c_char_p, c_
def taos_connect_auth(host=None, user="root", auth="", db=None, port=0):
# type: (None|str, str, str, None|str, int) -> c_void_p
"""
创建数据库连接,初始化连接上下文。其中需要用户提供的参数包含:
"""Connect server with auth token.
- host: server hostname/FQDN, TDengine管理主节点的FQDN
- user: user name/用户名
- host: server hostname/FQDN
- user: user name
- auth: base64 encoded auth token
- db: database name (optional)
- port: server port
......
This diff is collapsed.
......@@ -1246,13 +1246,13 @@ static int32_t mnodeAddSuperTableTag(SMnodeMsg *pMsg, SSchema schema[], int32_t
if (mnodeFindSuperTableColumnIndex(pStable, schema[i].name) > 0) {
mError("msg:%p, app:%p stable:%s, add tag, column:%s already exist", pMsg, pMsg->rpcMsg.ahandle,
pStable->info.tableId, schema[i].name);
return TSDB_CODE_MND_FIELD_ALREAY_EXIST;
return TSDB_CODE_MND_TAG_ALREAY_EXIST;
}
if (mnodeFindSuperTableTagIndex(pStable, schema[i].name) > 0) {
mError("msg:%p, app:%p stable:%s, add tag, tag:%s already exist", pMsg, pMsg->rpcMsg.ahandle,
pStable->info.tableId, schema[i].name);
return TSDB_CODE_MND_TAG_ALREAY_EXIST;
return TSDB_CODE_MND_FIELD_ALREAY_EXIST;
}
}
......
......@@ -6,6 +6,7 @@ INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/cJson/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/lz4/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/client/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/query/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/common/inc)
INCLUDE_DIRECTORIES(inc)
AUX_SOURCE_DIRECTORY(src SRC)
......
......@@ -19,6 +19,7 @@
#include "httpLog.h"
#include "httpRestHandle.h"
#include "httpRestJson.h"
#include "tglobal.h"
static HttpDecodeMethod restDecodeMethod = {"rest", restProcessRequest};
static HttpDecodeMethod restDecodeMethod2 = {"restful", restProcessRequest};
......@@ -111,6 +112,14 @@ bool restProcessSqlRequest(HttpContext* pContext, int32_t timestampFmt) {
pContext->db[0] = '\0';
HttpString *path = &pContext->parser->path[REST_USER_USEDB_URL_POS];
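// When httpDbNameMandatory is enabled, the REST URL must include a database name
// (e.g. /rest/sql/<db_name>); requests without one are rejected as an invalid URL.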
if (tsHttpDbNameMandatory) {
if (path->pos == 0) {
httpError("context:%p, fd:%d, user:%s, database name is mandatory", pContext, pContext->fd, pContext->user);
httpSendErrorResp(pContext, TSDB_CODE_HTTP_INVALID_URL);
return false;
}
}
if (path->pos > 0 && !(strlen(sql) > 4 && (sql[0] == 'u' || sql[0] == 'U') &&
(sql[1] == 's' || sql[1] == 'S') && (sql[2] == 'e' || sql[2] == 'E') && sql[3] == ' '))
{
......
......@@ -496,7 +496,7 @@ union(Y) ::= union(Z) UNION ALL select(X). { Y = appendSelectClause(Z, X); }
cmd ::= union(X). { setSqlInfo(pInfo, X, NULL, TSDB_SQL_SELECT); }
// Support for the SQL expression without from & where subclauses, e.g.,
// select current_database()
// select database()
// select server_version()
// select client_version()
// select server_state()
......
......@@ -3474,19 +3474,19 @@ void filterPrepare(void* expr, void* param) {
if (pInfo->optr == TSDB_RELATION_IN) {
int dummy = -1;
SHashObj *pObj = NULL;
SHashObj *pObj = NULL;
if (pInfo->sch.colId == TSDB_TBNAME_COLUMN_INDEX) {
pObj = taosHashInit(256, taosGetDefaultHashFunction(pInfo->sch.type), true, false);
SArray *arr = (SArray *)(pCond->arr);
for (size_t i = 0; i < taosArrayGetSize(arr); i++) {
char* p = taosArrayGetP(arr, i);
strtolower(varDataVal(p), varDataVal(p));
taosHashPut(pObj, varDataVal(p),varDataLen(p), &dummy, sizeof(dummy));
strntolower_s(varDataVal(p), varDataVal(p), varDataLen(p));
taosHashPut(pObj, varDataVal(p), varDataLen(p), &dummy, sizeof(dummy));
}
} else {
buildFilterSetFromBinary((void **)&pObj, pCond->pz, pCond->nLen);
}
pInfo->q = (char *)pObj;
pInfo->q = (char *)pObj;
} else if (pCond != NULL) {
uint32_t size = pCond->nLen * TSDB_NCHAR_SIZE;
if (size < (uint32_t)pSchema->bytes) {
......
......@@ -32,6 +32,7 @@ char * strnchr(char *haystack, char needle, int32_t len, bool skipquote);
char ** strsplit(char *src, const char *delim, int32_t *num);
char * strtolower(char *dst, const char *src);
char * strntolower(char *dst, const char *src, int32_t n);
char * strntolower_s(char *dst, const char *src, int32_t n);
int64_t strnatoi(char *num, int32_t len);
char * strbetween(char *string, char *begin, char *end);
char * paGetToken(char *src, char **token, int32_t *tokenLen);
......
......@@ -202,7 +202,7 @@ char* strntolower(char *dst, const char *src, int32_t n) {
if (n == 0) {
*p = 0;
return dst;
}
}
for (c = *src++; n-- > 0; c = *src++) {
if (esc) {
esc = 0;
......@@ -224,6 +224,26 @@ char* strntolower(char *dst, const char *src, int32_t n) {
return dst;
}
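// Unlike strntolower(), this variant performs no quote/escape handling: it lowercases
// exactly n bytes of src into dst and does not append a NUL terminator.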
char* strntolower_s(char *dst, const char *src, int32_t n) {
char *p = dst, c;
assert(dst != NULL);
if (n == 0) {
return NULL;
}
while (n-- > 0) {
c = *src;
if (c >= 'A' && c <= 'Z') {
c -= 'A' - 'a';
}
*p++ = c;
src++;
}
return dst;
}
char *paGetToken(char *string, char **token, int32_t *tokenLen) {
char quote = 0;
......
......@@ -172,6 +172,10 @@ python3 test.py -f tools/taosdemoTestSampleData.py
python3 test.py -f tools/taosdemoTestInterlace.py
python3 test.py -f tools/taosdemoTestQuery.py
# restful test for python
python3 test.py -f restful/restful_bind_db1.py
python3 test.py -f restful/restful_bind_db2.py
# nano support
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoInsert.py
python3 test.py -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.py
......@@ -385,6 +389,7 @@ python3 ./test.py -f insert/schemalessInsert.py
python3 ./test.py -f alter/alterColMultiTimes.py
python3 ./test.py -f query/queryWildcardLength.py
python3 ./test.py -f query/queryTbnameUpperLower.py
python3 ./test.py -f query/query.py
#======================p4-end===============
......
......@@ -25,7 +25,7 @@ class TDTestCase:
self.tables = 10
self.rows = 20
self.columns = 5
self.columns = 100
self.perfix = 't'
self.ts = 1601481600000
......@@ -33,8 +33,8 @@ class TDTestCase:
print("==============step1")
sql = "create table st(ts timestamp, "
for i in range(self.columns - 1):
sql += "c%d int, " % (i + 1)
sql += "c5 int) tags(t1 int)"
sql += "c%d bigint, " % (i + 1)
sql += "c100 bigint) tags(t1 int)"
tdSql.execute(sql)
for i in range(self.tables):
......
......@@ -25,6 +25,18 @@ class TDTestCase:
self.ts = 1538548685000
def bug_6387(self):
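# Regression test for TD-6387: insert 21 rows into each of 5000 sub-tables,
# then run count(*) with interval(1s) grouped by tbname.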
tdSql.execute("create database bug6387 ")
tdSql.execute("use bug6387 ")
tdSql.execute("create table test(ts timestamp, c1 int) tags(t1 int)")
for i in range(5000):
sql = "insert into t%d using test tags(1) values " % i
for j in range(21):
sql = sql + "(now+%ds,%d)" % (j ,j )
tdSql.execute(sql)
tdSql.query("select count(*) from test interval(1s) group by tbname")
tdSql.checkData(0,1,1)
def run(self):
tdSql.prepare()
......@@ -121,6 +133,9 @@ class TDTestCase:
tdSql.query("select * from tb0")
tdSql.checkRows(1)
#For jira: https://jira.taosdata.com:18080/browse/TD-6387
self.bug_6387()
def stop(self):
......
......@@ -17,7 +17,6 @@ from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
......@@ -25,7 +24,8 @@ class TDTestCase:
def run(self):
tdSql.prepare()
tdSql.execute("drop database db ")
tdSql.execute("create database if not exists db ")
tdSql.execute("create table cars(ts timestamp, c nchar(2)) tags(t1 nchar(2))")
tdSql.execute("insert into car0 using cars tags('aa') values(now, 'bb');")
tdSql.query("select count(*) from cars where t1 like '%50 90 30 04 00 00%'")
......@@ -35,6 +35,132 @@ class TDTestCase:
tdSql.execute("insert into car1 using test_cars tags('150 90 30 04 00 002') values(now, 'bb');")
tdSql.query("select * from test_cars where t1 like '%50 90 30 04 00 00%'")
tdSql.checkRows(1)
tdSql.execute("create stable st (ts timestamp , id int , name binary(20), data double ) tags (ind int , tagg binary(20)) ")
# check escape about tbname by show tables like
tdSql.execute("create table tb_ using st tags (1 , 'tag_' ) ")
tdSql.execute("insert into tb_ values( now , 1 , 'tbname_' , 1.0)")
tdSql.execute("create table tba using st tags (1 , 'taga' ) ")
tdSql.execute("insert into tba values( now , 1 , 'tbnamea' , 1.0)")
tdSql.query("show tables like 'tb_'")
tdSql.checkRows(2)
tdSql.query("show tables like 'tb\_'")
tdSql.checkRows(1)
# check escape about tbname by show tables like
tdSql.query("select * from st where tbname like 'tb\_'")
tdSql.checkRows(1)
# check escape about regular cols
tdSql.query("select * from st where name like 'tbname\_';")
tdSql.checkRows(1)
# check escape about tags
tdSql.query("select * from st where tagg like 'tag\_';")
tdSql.checkRows(1)
# ======================= check multi escape ===================
tdSql.execute("create table tb_1 using st tags (1 , 'tag_1' ) ")
tdSql.execute("insert into tb_1 values( now , 1 , 'tbname_1' , 1.0)")
tdSql.execute("create table tb_2 using st tags (2 , 'tag_2' )")
tdSql.execute("insert into tb_2 values( now , 2 , 'tbname_2' , 2.0)")
tdSql.execute("create table tb__ using st tags (3 , 'tag__' )")
tdSql.execute("insert into tb__ values( now , 3 , 'tbname__' , 2.0)")
tdSql.execute("create table tb__1 using st tags (3 , 'tag__1' )")
tdSql.execute("insert into tb__1 values( now , 1 , 'tbname__1' , 1.0)")
tdSql.execute("create table tb__2 using st tags (4 , 'tag__2' )")
tdSql.execute("create table tb___ using st tags (5 , 'tag___' )")
tdSql.execute("insert into tb___ values( now , 1 , 'tbname___' , 1.0)")
tdSql.execute("create table tb_d_ using st tags (5 , 'tag_d_' )")
tdSql.execute("insert into tb_d_ values( now , 1 , 'tbname_d_' , 1.0)")
tdSql.execute("create table tb_d__ using st tags (5 , 'tag_d__' )")
tdSql.execute("insert into tb_d__ values( now , 1 , 'tbname_d__' , 1.0)")
tdSql.execute("create table tb____ using st tags (5 , 'tag____' )")
tdSql.execute("insert into tb____ values( now , 1 , 'tbname____' , 1.0)")
tdSql.execute("create table tb__a_ using st tags (5 , 'tag__a_' )")
tdSql.execute("insert into tb__a_ values( now , 1 , 'tbname__a_' , 1.0)")
tdSql.execute("create table tb__ab__ using st tags (5 , 'tag__ab__' )")
tdSql.execute("insert into tb__ab__ values( now , 1 , 'tbname__ab__' , 1.0)")
# check escape about tbname by show tables like
tdSql.query("select * from st where tbname like 'tb__'")
tdSql.checkRows(3)
tdSql.query("select * from st where tbname like 'tb_\_'")
tdSql.checkRows(1)
tdSql.query("select * from st where tbname like 'tb___'")
tdSql.checkRows(4)
tdSql.query("select * from st where tbname like 'tb_\__'")
tdSql.checkRows(3)
tdSql.query("select * from st where tbname like 'tb_\_\_'")
tdSql.checkRows(1)
tdSql.query("select * from st where tbname like 'tb\__\_'")
tdSql.checkRows(1)
tdSql.query("select * from st where tbname like 'tb\__\__'")
tdSql.checkRows(2)
tdSql.query("select * from st where tbname like 'tb\__\_\_'")
tdSql.checkRows(2)
tdSql.query("select * from st where tbname like 'tb\____'")
tdSql.checkRows(3)
tdSql.query("select * from st where tbname like 'tb\_\__\_'")
tdSql.checkRows(2)
tdSql.query("select * from st where tbname like 'tb\_\_\_\_'")
tdSql.checkRows(1)
# check escape about regular cols
tdSql.query("select * from st where name like 'tbname\_\_';")
tdSql.checkRows(1)
tdSql.query("select * from st where name like 'tbname\__';")
tdSql.checkRows(3)
tdSql.query("select * from st where name like 'tbname___';")
tdSql.checkRows(4)
tdSql.query("select * from st where name like 'tbname_\__';")
tdSql.checkRows(3)
tdSql.query("select * from st where name like 'tbname_\_\_';")
tdSql.checkRows(1)
tdSql.query("select * from st where name like 'tbname\_\__';")
tdSql.checkRows(2)
tdSql.query("select * from st where name like 'tbname____';")
tdSql.checkRows(3)
tdSql.query("select * from st where name like 'tbname\_\___';")
tdSql.checkRows(2)
tdSql.query("select * from st where name like 'tbname\_\_\__';")
tdSql.checkRows(1)
tdSql.query("select * from st where name like 'tbname\_\__\_';")
tdSql.checkRows(2)
tdSql.query("select name from st where name like 'tbname\_\_\__';")
tdSql.checkData(0,0 "tbname____")
# check escape about tags
tdSql.query("select * from st where tagg like 'tag\_';")
tdSql.checkRows(1)
tdSql.query("select * from st where tagg like 'tag\_\_';")
tdSql.checkRows(1)
tdSql.query("select * from st where tagg like 'tag\__';")
tdSql.checkRows(3)
tdSql.query("select * from st where tagg like 'tag___';")
tdSql.checkRows(4)
tdSql.query("select * from st where tagg like 'tag_\__';")
tdSql.checkRows(3)
tdSql.query("select * from st where tagg like 'tag_\_\_';")
tdSql.checkRows(1)
tdSql.query("select * from st where tagg like 'tag\_\__';")
tdSql.checkRows(2)
tdSql.query("select * from st where tagg like 'tag____';")
tdSql.checkRows(3)
tdSql.query("select * from st where tagg like 'tag\_\___';")
tdSql.checkRows(2)
tdSql.query("select * from st where tagg like 'tag\_\_\__';")
tdSql.checkRows(1)
tdSql.query("select * from st where tagg like 'tag\_\__\_';")
tdSql.checkRows(2)
tdSql.query("select * from st where tagg like 'tag\_\__\_';")
tdSql.checkData(0,0 "tag__a_")
os.system("rm -rf ./*.py.sql")
def stop(self):
tdSql.close()
......
# #################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
# #################################################################
# -*- coding: utf-8 -*-
# TODO: after TD-4518 and TD-4510 is resolved, add the exception test case for these situations
from distutils.log import error
import sys
from requests.api import head
from requests.models import Response
from util.log import *
from util.cases import *
from util.sql import *
import time, datetime
import requests, json
import threading
import string
import random
import os
def check_unbind_db(url, data, header):
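# With httpDbNameMandatory enabled and no db name in the REST URL, every statement
# is expected to fail with 'invalid url format'; anything else aborts the test.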
resp = requests.post(url, data, headers = header )
resp.encoding='utf-8'
resp = eval(resp.text)
status = resp['status']
desc = resp['desc']
sqls = data
if status=="error" and desc == "invalid url format":
print(" %s : check pass" %sqls)
else:
printf(" error occured , ")
sys.exit()
def check_bind_db(url, data, header):
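# With a db name bound in the REST URL, the statement is expected to succeed;
# any failure aborts the test.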
resp = requests.post(url, data, headers = header )
resp.encoding='utf-8'
resp_dict = eval(resp.text)
status = resp_dict['status']
if status =="succ":
print("%s run success!"%data)
# print(resp.text)
else :
print("%s run failed !"%data)
print(resp.text)
sys.exit()
class TDTestCase():
updatecfgDict={'httpDbNameMandatory':1}
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
tdSql.execute('reset query cache')
tdSql.execute('drop database if exists test')
tdSql.execute('drop database if exists db')
tdSql.execute('drop database if exists des')
tdSql.execute('create database test')
tdSql.execute('create database des')
header = {'Authorization': 'Basic cm9vdDp0YW9zZGF0YQ=='}
url = "http://127.0.0.1:6041/rest/sql/"
# test with no bind databases
sqls = ["show databases;",
"use test",
"show tables;",
"show dnodes;",
"show vgroups;",
"create database db;",
"drop database db;",
"select client_version();" ,
"use test",
"ALTER DATABASE test COMP 2;",
"create table tb (ts timestamp, id int , data double)",
"insert into tb values (now , 1, 1.0) ",
"select * from tb",
"show test.tables",
"show tables",
"insert into tb values (now , 2, 2.0) ",
"create table test.tb (ts timestamp, id int , data double)",
"insert into test.tb values (now , 2, 2.0) ",
"select * from tb",
"select * from test.tb"]
for sql in sqls:
print("===================")
check_unbind_db(url,sql,header)
print("==================="*5)
print(" check bind db about restful ")
print("==================="*5)
url = "http://127.0.0.1:6041/rest/sql/des"
for sql in sqls:
print("===================")
check_bind_db(url,sql,header)
# check data
tdSql.query("select * from test.tb")
tdSql.checkRows(1)
tdSql.query("select * from des.tb")
tdSql.checkRows(2)
os.system('sudo timedatectl set-ntp on')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
# #################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
# #################################################################
# -*- coding: utf-8 -*-
# TODO: after TD-4518 and TD-4510 is resolved, add the exception test case for these situations
from distutils.log import error
import sys
from requests.api import head
from requests.models import Response
from util.log import *
from util.cases import *
from util.sql import *
import time, datetime
import requests, json
import threading
import string
import random
import os
def check_res(url, data, header):
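# The statement is expected to succeed; on failure, print the response and abort the test.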
resp = requests.post(url, data, headers = header )
resp.encoding='utf-8'
resp_dict = eval(resp.text)
status = resp_dict['status']
if status =="succ":
print("%s run success!"%data)
# print(resp.text)
else :
print("%s run failed !"%data)
print(resp.text)
sys.exit()
class TDTestCase():
# updatecfgDict={'httpDbNameMandatory':0}
def init(self, conn, logSql):
tdLog.debug("start to execute %s" % __file__)
tdSql.init(conn.cursor(), logSql)
def run(self):
tdSql.prepare()
tdSql.execute('reset query cache')
tdSql.execute('drop database if exists test')
tdSql.execute('drop database if exists db')
tdSql.execute('drop database if exists des')
tdSql.execute('create database test')
tdSql.execute('create database des')
header = {'Authorization': 'Basic cm9vdDp0YW9zZGF0YQ=='}
url = "http://127.0.0.1:6041/rest/sql/"
# test with no bind databases
sqls = ["show databases;",
"use test",
"show tables;",
"show dnodes;",
"show vgroups;",
"create database db;",
"drop database db;",
"select client_version();" ,
"use test",
"ALTER DATABASE test COMP 2;",
"create table tb (ts timestamp, id int , data double)",
"insert into tb values (now , 1, 1.0) ",
"select * from tb",
"show test.tables",
"show tables",
"insert into tb values (now , 2, 2.0) ",
"create table test.tb (ts timestamp, id int , data double)",
"insert into test.tb values (now , 3, 3.0) ",
"select * from tb",
"select * from test.tb",
"create table des.tb (ts timestamp, id int , data double)",
"insert into des.tb values (now , 3, 3.0)"]
for sql in sqls:
print("===================")
if sql == "create table test.tb (ts timestamp, id int , data double)":
resp = requests.post(url, sql, headers = header )
print(resp.text)
print ("%s run occur error as expect ,check pass!" %(sql))
else:
check_res(url,sql,header)
tdSql.query("select * from test.tb")
tdSql.checkRows(3)
tdSql.query("select * from des.tb")
tdSql.checkRows(1)
print("==================="*5)
print(" check bind db about restful ")
print("==================="*5)
tdSql.execute('reset query cache')
tdSql.execute('drop database if exists test')
tdSql.execute('drop database if exists db')
tdSql.execute('drop database if exists des')
tdSql.execute('create database test')
tdSql.execute('create database des')
url = "http://127.0.0.1:6041/rest/sql/des"
for sql in sqls:
print("===================")
if sql in ["create table des.tb (ts timestamp, id int , data double)"]:
resp = requests.post(url, sql, headers = header )
print(resp.text)
print ("%s run occur error as expect ,check pass!" %(sql))
else:
check_res(url,sql,header)
# check data
tdSql.query("select * from test.tb")
tdSql.checkRows(1)
tdSql.query("select * from des.tb")
tdSql.checkRows(3)
os.system('sudo timedatectl set-ntp on')
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
......@@ -21,7 +21,15 @@ import shutil
import pandas as pd
from util.log import *
def _parse_datetime(timestr):
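# Try the millisecond-precision timestamp format first, then whole seconds;
# returns None if neither format matches.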
try:
return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S.%f')
except ValueError:
pass
try:
return datetime.datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
except ValueError:
pass
class TDSql:
def __init__(self):
......@@ -181,7 +189,7 @@ class TDSql:
tdLog.info("sql:%s, row:%d col:%d data:%d == expect:%s" %
(self.sql, row, col, self.queryResult[row][col], data))
else:
if self.queryResult[row][col] == datetime.datetime.fromisoformat(data):
if self.queryResult[row][col] == _parse_datetime(data):
tdLog.info("sql:%s, row:%d col:%d data:%s == expect:%s" %
(self.sql, row, col, self.queryResult[row][col], data))
return
......
system sh/stop_dnodes.sh
sleep 2000
system sh/deploy.sh -n dnode1 -i 1
system sh/cfg.sh -n dnode1 -c walLevel -v 1
system sh/cfg.sh -n dnode1 -c http -v 1
system sh/cfg.sh -n dnode1 -c httpDbNameMandatory -v 1
system sh/exec.sh -n dnode1 -s start
sleep 2000
sql connect
sql drop database if exists db
print ============================ dnode1 start
print =============== step1 - login
system_content curl 127.0.0.1:7111/rest/login/root/taosdata
print curl 127.0.0.1:7111/rest/login/root/taosdata -----> $system_content
if $system_content != @{"status":"succ","code":0,"desc":"/KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04"}@ then
return -1
endi
print =============== step2 - execute sql without db_name
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'show databases' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'show databases' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database if not exists db' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database if not exists db' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create table table_rest (ts timestamp, i int)' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create table table_rest (ts timestamp, i int)' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'insert into table_rest values (now, 1)' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'insert into table_rest values (now, 1)' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from table_rest' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from table_rest' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'drop database if exists db' 127.0.0.1:7111/rest/sql
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'drop database if exists db' 127.0.0.1:7111/rest/sql
if $system_content != @{"status":"error","code":4354,"desc":"invalid url format"}@ then
return -1
endi
print =============== step3 - execute sql with db_name
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'show databases' 127.0.0.1:7111/rest/sql/databases
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'show databases' 127.0.0.1:7111/rest/sql/databases
if $system_content != @{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[],"rows":0}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database if not exists db' 127.0.0.1:7111/rest/sql/db
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database if not exists db' 127.0.0.1:7111/rest/sql/db
if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create table table_rest (ts timestamp, i int)' 127.0.0.1:7111/rest/sql/db
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create table table_rest (ts timestamp, i int)' 127.0.0.1:7111/rest/sql/db
if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'insert into table_rest values (1629904789233, 1)' 127.0.0.1:7111/rest/sql/db
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'insert into table_rest values (1629904789233, 1)' 127.0.0.1:7111/rest/sql/db
if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[1]],"rows":1}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from table_rest' 127.0.0.1:7111/rest/sql/db
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from table_rest' 127.0.0.1:7111/rest/sql/db
if $system_content != @{"status":"succ","head":["ts","i"],"column_meta":[["ts",9,8],["i",4,4]],"data":[["2021-08-25 23:19:49.233",1]],"rows":1}@ then
return -1
endi
print curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'drop database if exists db' 127.0.0.1:7111/rest/sql/db
system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'drop database if exists db' 127.0.0.1:7111/rest/sql/db
if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT