Commit 9456692a, authored by 陈世营

Initial commit
*.class
# Package Files #
*.jar
*.war
*.ear
*.versionsBackup
*.iml
.idea/
target/
logs/
log/
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Leaf
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Leaf
> There are no two identical leaves in the world.
>
> — Leibniz
[中文文档](./README_CN.md) | [English Document](./README.md)
## Introduction
Leaf draws on common ID generation schemes in the industry, such as Redis, UUID, and snowflake.
Each of these approaches has its own problems, so we decided to implement a distributed ID generation service to meet our requirements.
At present, Leaf covers many business lines inside Meituan-Dianping, including finance, catering, takeout, hotel & travel, and Maoyan Movie. On a 4C8G VM, called through the company's RPC framework, load tests show nearly 50,000 QPS with a TP999 of 1 ms.
You can use it as the ID provider for all applications by wrapping it as a distributed unique-ID issuing service in a service-oriented (SOA) architecture.
## Quick Start
### Leaf Server
Leaf provides an HTTP service based on Spring Boot for fetching IDs.
#### run Leaf Server
##### build
```shell
git clone git@github.com:Meituan-Dianping/Leaf.git
cd Leaf
mvn clean install -DskipTests
cd leaf-server
```
##### run
###### maven
```shell
mvn spring-boot:run
```
or
###### shell command
```shell
sh deploy/run.sh
```
##### test
```shell
#segment
curl http://localhost:8080/api/segment/get/leaf-segment-test
#snowflake
curl http://localhost:8080/api/snowflake/get/test
```
#### Configuration
Leaf provides two ways to generate IDs (segment mode and snowflake mode). You can enable both at the same time, or just one of them (both are off by default).
Leaf Server configuration lives in leaf-server/src/main/resources/leaf.properties.
| configuration | meaning | default |
| ------------------------- | ----------------------------- | ------ |
| leaf.name | leaf service name | |
| leaf.segment.enable | whether segment mode is enabled | false |
| leaf.jdbc.url | mysql url | |
| leaf.jdbc.username | mysql username | |
| leaf.jdbc.password | mysql password | |
| leaf.snowflake.enable | whether snowflake mode is enabled | false |
| leaf.snowflake.zk.address | zk address under snowflake mode | |
| leaf.snowflake.port | service registration port under snowflake mode | |
### Segment mode
To use segment mode, you first need to create the DB table and configure leaf.jdbc.url, leaf.jdbc.username, and leaf.jdbc.password.
If you do not want to use this mode, just set leaf.segment.enable=false to disable it.
```sql
CREATE DATABASE leaf;
USE leaf;
CREATE TABLE `leaf_alloc` (
  `biz_tag` varchar(128) NOT NULL DEFAULT '', -- your biz unique name
  `max_id` bigint(20) NOT NULL DEFAULT '1',
  `step` int(11) NOT NULL,
  `description` varchar(256) DEFAULT NULL,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`biz_tag`)
) ENGINE=InnoDB;
INSERT INTO leaf_alloc(biz_tag, max_id, step, description) VALUES('leaf-segment-test', 1, 2000, 'Test leaf Segment Mode Get Id');
```
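For reference, a minimal sketch of the matching leaf.properties entries (the JDBC values below are placeholders for illustration, not part of the project):
```
leaf.segment.enable=true
leaf.jdbc.url=jdbc:mysql://localhost:3306/leaf?useUnicode=true&characterEncoding=utf8
leaf.jdbc.username=root
leaf.jdbc.password=root
```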
### Snowflake mode
The algorithm is taken from Twitter's open-source snowflake algorithm.
If you do not want to use this mode, just set leaf.snowflake.enable=false to disable it.
Configure the ZooKeeper address:
```
leaf.snowflake.zk.address=${address}
leaf.snowflake.enable=true
leaf.snowflake.port=${port}
```
Set leaf.snowflake.zk.address in leaf.properties, and set leaf.snowflake.port to the port on which the Leaf service listens.
### monitor page
segment mode: http://localhost:8080/cache
### Leaf Core
Of course, to pursue higher performance, you can deploy the Leaf service behind an RPC server: just include the leaf-core package and wrap the ID-generating API in the RPC framework of your choice, as in the sketch below.
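A minimal sketch of that idea (the `IdRpcService` class is hypothetical, not part of Leaf); it delegates to whichever `IDGen` implementation you initialized:
```java
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.Status;

// Hypothetical RPC-facing wrapper around a Leaf IDGen
// (e.g. SegmentIDGenImpl or SnowflakeIDGenImpl, initialized elsewhere).
public class IdRpcService {
    private final IDGen idGen;

    public IdRpcService(IDGen idGen) {
        this.idGen = idGen;
    }

    // Expose this method through the RPC framework of your choice.
    public long nextId(String bizTag) {
        Result r = idGen.get(bizTag);
        if (r.getStatus() != Status.SUCCESS) {
            // Negative ids encode Leaf's error conditions (see SegmentIDGenImpl).
            throw new IllegalStateException("Leaf error code: " + r.getId());
        }
        return r.getId();
    }
}
```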
#### Attention
Note that in snowflake mode Leaf currently obtains its IP by taking the first network card's IP directly. Pay special attention to services whose IP may change: a new IP registers a new workerId, wasting the old one. A specific interface can be pinned instead, as sketched below.
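A minimal sketch of pinning an interface with the `Utils` helper shipped in leaf-core (the interface name `eth0` is a placeholder for illustration):
```java
import com.sankuai.inf.leaf.common.Utils;

public class IpDemo {
    public static void main(String[] args) {
        // Default behavior: first non-loopback IPv4 address found on any interface.
        System.out.println(Utils.getIp());
        // Pin a specific interface by display name ("eth0" is a placeholder),
        // useful on hosts with multiple or changing interfaces.
        System.out.println(Utils.getIp("eth0"));
    }
}
```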
# Leaf
> There are no two identical leaves in the world.
>
> — Leibniz
[中文文档](./README_CN.md) | [English Document](./README.md)
## Introduction
Leaf's earliest requirement was order-ID generation for the various business lines. In Meituan's early days, some businesses generated IDs directly through DB auto-increment, some through a Redis cache, and some simply used UUIDs. Each of these approaches has its own problems, so we decided to implement a distributed ID generation service to meet the requirements. For the detailed design of Leaf, see: [Leaf: Meituan's distributed ID generation service](https://tech.meituan.com/MT_Leaf.html)
At present, Leaf covers many business lines inside Meituan-Dianping, including finance, catering, takeout, hotel & travel, and Maoyan Movie. On a 4C8G VM, called through the company's RPC framework, load tests show nearly 50,000 QPS with a TP999 of 1 ms.
## Quick Start
### Leaf Server
We provide an HTTP service based on Spring Boot for fetching IDs.
#### Configuration
Leaf provides two ways to generate IDs (segment mode and snowflake mode). You can enable both at the same time, or just one of them (both are off by default).
Leaf Server configuration lives in leaf-server/src/main/resources/leaf.properties.
| configuration | meaning | default |
| ------------------------- | ----------------------------- | ------ |
| leaf.name | leaf service name | |
| leaf.segment.enable | whether segment mode is enabled | false |
| leaf.jdbc.url | mysql url | |
| leaf.jdbc.username | mysql username | |
| leaf.jdbc.password | mysql password | |
| leaf.snowflake.enable | whether snowflake mode is enabled | false |
| leaf.snowflake.zk.address | zk address under snowflake mode | |
| leaf.snowflake.port | service registration port under snowflake mode | |
#### Segment mode
To use segment mode, you need to create the DB table and configure leaf.jdbc.url, leaf.jdbc.username, and leaf.jdbc.password.
If you do not want to use this mode, just set leaf.segment.enable=false to disable it.
##### Create the table
```sql
CREATE DATABASE leaf;
USE leaf;
CREATE TABLE `leaf_alloc` (
  `biz_tag` varchar(128) NOT NULL DEFAULT '',
  `max_id` bigint(20) NOT NULL DEFAULT '1',
  `step` int(11) NOT NULL,
  `description` varchar(256) DEFAULT NULL,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`biz_tag`)
) ENGINE=InnoDB;
INSERT INTO leaf_alloc(biz_tag, max_id, step, description) VALUES('leaf-segment-test', 1, 2000, 'Test leaf Segment Mode Get Id');
```
##### Configure the data source
Configure leaf.jdbc.url, leaf.jdbc.username, and leaf.jdbc.password in leaf.properties.
#### Snowflake mode
The algorithm is taken from Twitter's open-source snowflake algorithm.
If you do not want to use this mode, just set leaf.snowflake.enable=false to disable it.
##### Configure the ZooKeeper address
Set leaf.snowflake.zk.address in leaf.properties, and set leaf.snowflake.port to the port on which the Leaf service listens.
#### run Leaf Server
##### build
```shell
git clone git@github.com:Meituan-Dianping/Leaf.git
# configure the project as described in the segment-mode section above
cd Leaf
mvn clean install -DskipTests
cd leaf-server
```
##### run
*Note: configure the database table or the zk address first.*
###### maven
```shell
mvn spring-boot:run
```
###### shell command
```shell
sh deploy/run.sh
```
##### test
```shell
#segment
curl http://localhost:8080/api/segment/get/leaf-segment-test
#snowflake
curl http://localhost:8080/api/snowflake/get/test
```
##### monitor page
segment mode: http://localhost:8080/cache
### Leaf Core
Of course, to pursue higher performance, you can deploy the Leaf service behind an RPC server: just include the leaf-core package and wrap the ID-generating API in the RPC framework of your choice.
### Attention
Note that in snowflake mode Leaf currently obtains its IP by taking the first network card's IP directly. Pay special attention to services whose IP may change: a new IP registers a new workerId, wasting the old one.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-parent</artifactId>
<version>1.0.1</version>
</parent>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-core</artifactId>
<packaging>jar</packaging>
<version>1.0.1</version>
<name>leaf-core</name>
<properties>
<mysql-connector-java.version>5.1.38</mysql-connector-java.version>
<commons-io.version>2.4</commons-io.version>
<log4j.version>2.7</log4j.version>
<mybatis-spring.version>1.2.5</mybatis-spring.version>
</properties>
<dependencies>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis</artifactId>
</dependency>
<dependency>
<groupId>org.perf4j</groupId>
<artifactId>perf4j</artifactId>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
</dependency>
<!--zk-->
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson-databind.version}</version>
<scope>provided</scope>
</dependency>
<!-- test scope -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis-spring</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
</project>
package com.sankuai.inf.leaf;
import com.sankuai.inf.leaf.common.Result;
public interface IDGen {
Result get(String key);
boolean init();
}
package com.sankuai.inf.leaf.common;
public class CheckVO {
private long timestamp;
private int workID;
public CheckVO(long timestamp, int workID) {
this.timestamp = timestamp;
this.workID = workID;
}
public long getTimestamp() {
return timestamp;
}
public void setTimestamp(long timestamp) {
this.timestamp = timestamp;
}
public int getWorkID() {
return workID;
}
public void setWorkID(int workID) {
this.workID = workID;
}
}
package com.sankuai.inf.leaf.common;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Properties;
public class PropertyFactory {
private static final Logger logger = LoggerFactory.getLogger(PropertyFactory.class);
private static final Properties prop = new Properties();
static {
try {
prop.load(PropertyFactory.class.getClassLoader().getResourceAsStream("leaf.properties"));
} catch (Exception e) {
logger.warn("Load leaf.properties Properties Exception");
}
}
public static Properties getProperties() {
return prop;
}
}
package com.sankuai.inf.leaf.common;
public class Result {
private long id;
private Status status;
public Result() {
}
public Result(long id, Status status) {
this.id = id;
this.status = status;
}
public long getId() {
return id;
}
public void setId(long id) {
this.id = id;
}
public Status getStatus() {
return status;
}
public void setStatus(Status status) {
this.status = status;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("Result{");
sb.append("id=").append(id);
sb.append(", status=").append(status);
sb.append('}');
return sb.toString();
}
}
package com.sankuai.inf.leaf.common;
public enum Status {
SUCCESS,
EXCEPTION
}
package com.sankuai.inf.leaf.common;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
public class Utils {
private static final Logger logger = LoggerFactory.getLogger(Utils.class);
public static String getIp() {
String ip;
try {
List<String> ipList = getHostAddress(null);
// default to the first address
ip = (!ipList.isEmpty()) ? ipList.get(0) : "";
} catch (Exception ex) {
ip = "";
logger.warn("Utils get IP warn", ex);
}
return ip;
}
public static String getIp(String interfaceName) {
String ip;
interfaceName = interfaceName.trim();
try {
List<String> ipList = getHostAddress(interfaceName);
ip = (!ipList.isEmpty()) ? ipList.get(0) : "";
} catch (Exception ex) {
ip = "";
logger.warn("Utils get IP warn", ex);
}
return ip;
}
/**
* Get the IP addresses of active network interfaces.
*
* @param interfaceName interface name to filter by; null returns all
* @return List<String>
*/
private static List<String> getHostAddress(String interfaceName) throws SocketException {
List<String> ipList = new ArrayList<String>(5);
Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces();
while (interfaces.hasMoreElements()) {
NetworkInterface ni = interfaces.nextElement();
Enumeration<InetAddress> allAddress = ni.getInetAddresses();
while (allAddress.hasMoreElements()) {
InetAddress address = allAddress.nextElement();
if (address.isLoopbackAddress()) {
// skip the loopback addr
continue;
}
if (address instanceof Inet6Address) {
// skip the IPv6 addr
continue;
}
String hostAddress = address.getHostAddress();
if (null == interfaceName) {
ipList.add(hostAddress);
} else if (interfaceName.equals(ni.getDisplayName())) {
ipList.add(hostAddress);
}
}
}
return ipList;
}
}
package com.sankuai.inf.leaf.common;
import com.sankuai.inf.leaf.IDGen;
public class ZeroIDGen implements IDGen {
@Override
public Result get(String key) {
return new Result(0, Status.SUCCESS);
}
@Override
public boolean init() {
return true;
}
}
package com.sankuai.inf.leaf.segment;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.Status;
import com.sankuai.inf.leaf.segment.dao.IDAllocDao;
import com.sankuai.inf.leaf.segment.model.*;
import org.perf4j.StopWatch;
import org.perf4j.slf4j.Slf4JStopWatch;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;
public class SegmentIDGenImpl implements IDGen {
private static final Logger logger = LoggerFactory.getLogger(SegmentIDGenImpl.class);
/**
* Exception id returned when the IDCache has not been initialized successfully
*/
private static final long EXCEPTION_ID_IDCACHE_INIT_FALSE = -1;
/**
* Exception id returned when the key does not exist
*/
private static final long EXCEPTION_ID_KEY_NOT_EXISTS = -2;
/**
* Exception id returned when neither Segment in the SegmentBuffer has been loaded from the DB
*/
private static final long EXCEPTION_ID_TWO_SEGMENTS_ARE_NULL = -3;
/**
* Maximum step size: 1,000,000
*/
private static final int MAX_STEP = 1000000;
/**
* Target lifetime of a single Segment: 15 minutes
*/
private static final long SEGMENT_DURATION = 15 * 60 * 1000L;
private ExecutorService service = new ThreadPoolExecutor(5, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new UpdateThreadFactory());
private volatile boolean initOK = false;
private Map<String, SegmentBuffer> cache = new ConcurrentHashMap<String, SegmentBuffer>();
private IDAllocDao dao;
public static class UpdateThreadFactory implements ThreadFactory {
private static int threadInitNumber = 0;
private static synchronized int nextThreadNum() {
return threadInitNumber++;
}
@Override
public Thread newThread(Runnable r) {
return new Thread(r, "Thread-Segment-Update-" + nextThreadNum());
}
}
@Override
public boolean init() {
logger.info("Init ...");
// init succeeds only after the key-values have been loaded from the DB
updateCacheFromDb();
initOK = true;
updateCacheFromDbAtEveryMinute();
return initOK;
}
private void updateCacheFromDbAtEveryMinute() {
ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setName("check-idCache-thread");
t.setDaemon(true);
return t;
}
});
service.scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
updateCacheFromDb();
}
}, 60, 60, TimeUnit.SECONDS);
}
private void updateCacheFromDb() {
logger.info("update cache from db");
StopWatch sw = new Slf4JStopWatch();
try {
List<String> dbTags = dao.getAllTags();
if (dbTags == null || dbTags.isEmpty()) {
return;
}
List<String> cacheTags = new ArrayList<String>(cache.keySet());
List<String> insertTags = new ArrayList<String>(dbTags);
List<String> removeTags = new ArrayList<String>(cacheTags);
// load tags newly added in the db into the cache
insertTags.removeAll(cacheTags);
for (String tag : insertTags) {
SegmentBuffer buffer = new SegmentBuffer();
buffer.setKey(tag);
Segment segment = buffer.getCurrent();
segment.setValue(new AtomicLong(0));
segment.setMax(0);
segment.setStep(0);
cache.put(tag, buffer);
logger.info("Add tag {} from db to IdCache, SegmentBuffer {}", tag, buffer);
}
// remove tags that no longer exist in the db from the cache
removeTags.removeAll(dbTags);
for (String tag : removeTags) {
cache.remove(tag);
logger.info("Remove tag {} from IdCache", tag);
}
} catch (Exception e) {
logger.warn("update cache from db exception", e);
} finally {
sw.stop("updateCacheFromDb");
}
}
@Override
public Result get(final String key) {
if (!initOK) {
return new Result(EXCEPTION_ID_IDCACHE_INIT_FALSE, Status.EXCEPTION);
}
if (cache.containsKey(key)) {
SegmentBuffer buffer = cache.get(key);
if (!buffer.isInitOk()) {
synchronized (buffer) {
if (!buffer.isInitOk()) {
try {
updateSegmentFromDb(key, buffer.getCurrent());
logger.info("Init buffer. Update leafkey {} {} from db", key, buffer.getCurrent());
buffer.setInitOk(true);
} catch (Exception e) {
logger.warn("Init buffer {} exception", buffer.getCurrent(), e);
}
}
}
}
return getIdFromSegmentBuffer(cache.get(key));
}
return new Result(EXCEPTION_ID_KEY_NOT_EXISTS, Status.EXCEPTION);
}
public void updateSegmentFromDb(String key, Segment segment) {
StopWatch sw = new Slf4JStopWatch();
SegmentBuffer buffer = segment.getBuffer();
LeafAlloc leafAlloc;
if (!buffer.isInitOk()) {
leafAlloc = dao.updateMaxIdAndGetLeafAlloc(key);
buffer.setStep(leafAlloc.getStep());
buffer.setMinStep(leafAlloc.getStep()); // leafAlloc.step is the step stored in the DB
} else if (buffer.getUpdateTimestamp() == 0) {
leafAlloc = dao.updateMaxIdAndGetLeafAlloc(key);
buffer.setUpdateTimestamp(System.currentTimeMillis());
buffer.setStep(leafAlloc.getStep());
buffer.setMinStep(leafAlloc.getStep()); // leafAlloc.step is the step stored in the DB
} else {
long duration = System.currentTimeMillis() - buffer.getUpdateTimestamp();
int nextStep = buffer.getStep();
// Adjust the step dynamically: if the previous segment was consumed in under
// 15 minutes, double the step (capped at MAX_STEP); between 15 and 30 minutes,
// keep it; beyond 30 minutes, halve it (but not below minStep).
// E.g. step 2000 consumed in 10 minutes -> nextStep 4000.
if (duration < SEGMENT_DURATION) {
if (nextStep * 2 > MAX_STEP) {
//do nothing
} else {
nextStep = nextStep * 2;
}
} else if (duration < SEGMENT_DURATION * 2) {
//do nothing with nextStep
} else {
nextStep = nextStep / 2 >= buffer.getMinStep() ? nextStep / 2 : nextStep;
}
logger.info("leafKey[{}], step[{}], duration[{}mins], nextStep[{}]", key, buffer.getStep(), String.format("%.2f",((double)duration / (1000 * 60))), nextStep);
LeafAlloc temp = new LeafAlloc();
temp.setKey(key);
temp.setStep(nextStep);
leafAlloc = dao.updateMaxIdByCustomStepAndGetLeafAlloc(temp);
buffer.setUpdateTimestamp(System.currentTimeMillis());
buffer.setStep(nextStep);
buffer.setMinStep(leafAlloc.getStep()); // leafAlloc.step is the step stored in the DB
}
// must set value before setting max
long value = leafAlloc.getMaxId() - buffer.getStep();
segment.getValue().set(value);
segment.setMax(leafAlloc.getMaxId());
segment.setStep(buffer.getStep());
sw.stop("updateSegmentFromDb", key + " " + segment);
}
public Result getIdFromSegmentBuffer(final SegmentBuffer buffer) {
while (true) {
buffer.rLock().lock();
try {
final Segment segment = buffer.getCurrent();
if (!buffer.isNextReady() && (segment.getIdle() < 0.9 * segment.getStep()) && buffer.getThreadRunning().compareAndSet(false, true)) {
service.execute(new Runnable() {
@Override
public void run() {
Segment next = buffer.getSegments()[buffer.nextPos()];
boolean updateOk = false;
try {
updateSegmentFromDb(buffer.getKey(), next);
updateOk = true;
logger.info("update segment {} from db {}", buffer.getKey(), next);
} catch (Exception e) {
logger.warn(buffer.getKey() + " updateSegmentFromDb exception", e);
} finally {
if (updateOk) {
buffer.wLock().lock();
buffer.setNextReady(true);
buffer.getThreadRunning().set(false);
buffer.wLock().unlock();
} else {
buffer.getThreadRunning().set(false);
}
}
}
});
}
long value = segment.getValue().getAndIncrement();
if (value < segment.getMax()) {
return new Result(value, Status.SUCCESS);
}
} finally {
buffer.rLock().unlock();
}
waitAndSleep(buffer);
buffer.wLock().lock();
try {
final Segment segment = buffer.getCurrent();
long value = segment.getValue().getAndIncrement();
if (value < segment.getMax()) {
return new Result(value, Status.SUCCESS);
}
if (buffer.isNextReady()) {
buffer.switchPos();
buffer.setNextReady(false);
} else {
logger.error("Both two segments in {} are not ready!", buffer);
return new Result(EXCEPTION_ID_TWO_SEGMENTS_ARE_NULL, Status.EXCEPTION);
}
} finally {
buffer.wLock().unlock();
}
}
}
private void waitAndSleep(SegmentBuffer buffer) {
int roll = 0;
// Spin while the background thread is still loading the next segment;
// after 10000 spins, back off with a short sleep to avoid burning CPU.
while (buffer.getThreadRunning().get()) {
roll += 1;
if (roll > 10000) {
try {
TimeUnit.MILLISECONDS.sleep(10);
break;
} catch (InterruptedException e) {
logger.warn("Thread {} Interrupted",Thread.currentThread().getName());
break;
}
}
}
}
public List<LeafAlloc> getAllLeafAllocs() {
return dao.getAllLeafAllocs();
}
public Map<String, SegmentBuffer> getCache() {
return cache;
}
public IDAllocDao getDao() {
return dao;
}
public void setDao(IDAllocDao dao) {
this.dao = dao;
}
}
package com.sankuai.inf.leaf.segment.dao;
import com.sankuai.inf.leaf.segment.model.LeafAlloc;
import java.util.List;
public interface IDAllocDao {
List<LeafAlloc> getAllLeafAllocs();
LeafAlloc updateMaxIdAndGetLeafAlloc(String tag);
LeafAlloc updateMaxIdByCustomStepAndGetLeafAlloc(LeafAlloc leafAlloc);
List<String> getAllTags();
}
package com.sankuai.inf.leaf.segment.dao;
import com.sankuai.inf.leaf.segment.model.LeafAlloc;
import org.apache.ibatis.annotations.*;
import java.util.List;
public interface IDAllocMapper {
@Select("SELECT biz_tag, max_id, step, update_time FROM leaf_alloc")
@Results(value = {
@Result(column = "biz_tag", property = "key"),
@Result(column = "max_id", property = "maxId"),
@Result(column = "step", property = "step"),
@Result(column = "update_time", property = "updateTime")
})
List<LeafAlloc> getAllLeafAllocs();
@Select("SELECT biz_tag, max_id, step FROM leaf_alloc WHERE biz_tag = #{tag}")
@Results(value = {
@Result(column = "biz_tag", property = "key"),
@Result(column = "max_id", property = "maxId"),
@Result(column = "step", property = "step")
})
LeafAlloc getLeafAlloc(@Param("tag") String tag);
@Update("UPDATE leaf_alloc SET max_id = max_id + step WHERE biz_tag = #{tag}")
void updateMaxId(@Param("tag") String tag);
@Update("UPDATE leaf_alloc SET max_id = max_id + #{step} WHERE biz_tag = #{key}")
void updateMaxIdByCustomStep(@Param("leafAlloc") LeafAlloc leafAlloc);
@Select("SELECT biz_tag FROM leaf_alloc")
List<String> getAllTags();
}
package com.sankuai.inf.leaf.segment.dao.impl;
import com.sankuai.inf.leaf.segment.dao.IDAllocDao;
import com.sankuai.inf.leaf.segment.dao.IDAllocMapper;
import com.sankuai.inf.leaf.segment.model.LeafAlloc;
import org.apache.ibatis.mapping.Environment;
import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;
import org.apache.ibatis.transaction.TransactionFactory;
import org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory;
import javax.sql.DataSource;
import java.util.List;
public class IDAllocDaoImpl implements IDAllocDao {
SqlSessionFactory sqlSessionFactory;
public IDAllocDaoImpl(DataSource dataSource) {
TransactionFactory transactionFactory = new JdbcTransactionFactory();
Environment environment = new Environment("development", transactionFactory, dataSource);
Configuration configuration = new Configuration(environment);
configuration.addMapper(IDAllocMapper.class);
sqlSessionFactory = new SqlSessionFactoryBuilder().build(configuration);
}
@Override
public List<LeafAlloc> getAllLeafAllocs() {
SqlSession sqlSession = sqlSessionFactory.openSession(false);
try {
return sqlSession.selectList("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.getAllLeafAllocs");
} finally {
sqlSession.close();
}
}
@Override
public LeafAlloc updateMaxIdAndGetLeafAlloc(String tag) {
SqlSession sqlSession = sqlSessionFactory.openSession();
try {
sqlSession.update("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.updateMaxId", tag);
LeafAlloc result = sqlSession.selectOne("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.getLeafAlloc", tag);
sqlSession.commit();
return result;
} finally {
sqlSession.close();
}
}
@Override
public LeafAlloc updateMaxIdByCustomStepAndGetLeafAlloc(LeafAlloc leafAlloc) {
SqlSession sqlSession = sqlSessionFactory.openSession();
try {
sqlSession.update("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.updateMaxIdByCustomStep", leafAlloc);
LeafAlloc result = sqlSession.selectOne("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.getLeafAlloc", leafAlloc.getKey());
sqlSession.commit();
return result;
} finally {
sqlSession.close();
}
}
@Override
public List<String> getAllTags() {
SqlSession sqlSession = sqlSessionFactory.openSession(false);
try {
return sqlSession.selectList("com.sankuai.inf.leaf.segment.dao.IDAllocMapper.getAllTags");
} finally {
sqlSession.close();
}
}
}
package com.sankuai.inf.leaf.segment.model;
public class LeafAlloc {
private String key;
private long maxId;
private int step;
private String updateTime;
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
public long getMaxId() {
return maxId;
}
public void setMaxId(long maxId) {
this.maxId = maxId;
}
public int getStep() {
return step;
}
public void setStep(int step) {
this.step = step;
}
public String getUpdateTime() {
return updateTime;
}
public void setUpdateTime(String updateTime) {
this.updateTime = updateTime;
}
}
package com.sankuai.inf.leaf.segment.model;
import java.util.concurrent.atomic.AtomicLong;
public class Segment {
private AtomicLong value = new AtomicLong(0);
private volatile long max;
private volatile int step;
private SegmentBuffer buffer;
public Segment(SegmentBuffer buffer) {
this.buffer = buffer;
}
public AtomicLong getValue() {
return value;
}
public void setValue(AtomicLong value) {
this.value = value;
}
public long getMax() {
return max;
}
public void setMax(long max) {
this.max = max;
}
public int getStep() {
return step;
}
public void setStep(int step) {
this.step = step;
}
public SegmentBuffer getBuffer() {
return buffer;
}
public long getIdle() {
return this.getMax() - getValue().get();
}
@Override
public String toString() {
StringBuilder sb = new StringBuilder("Segment(");
sb.append("value:");
sb.append(value);
sb.append(",max:");
sb.append(max);
sb.append(",step:");
sb.append(step);
sb.append(")");
return sb.toString();
}
}
package com.sankuai.inf.leaf.segment.model;
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
/**
* Double buffer: two segments used in rotation
*/
public class SegmentBuffer {
private String key;
private Segment[] segments; // double buffer
private volatile int currentPos; // index of the segment currently in use
private volatile boolean nextReady; // whether the next segment is ready to switch to
private volatile boolean initOk; // whether initialization has completed
private final AtomicBoolean threadRunning; // whether the update thread is running
private final ReadWriteLock lock;
private volatile int step;
private volatile int minStep;
private volatile long updateTimestamp;
public SegmentBuffer() {
segments = new Segment[]{new Segment(this), new Segment(this)};
currentPos = 0;
nextReady = false;
initOk = false;
threadRunning = new AtomicBoolean(false);
lock = new ReentrantReadWriteLock();
}
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
public Segment[] getSegments() {
return segments;
}
public Segment getCurrent() {
return segments[currentPos];
}
public int getCurrentPos() {
return currentPos;
}
public int nextPos() {
return (currentPos + 1) % 2;
}
public void switchPos() {
currentPos = nextPos();
}
public boolean isInitOk() {
return initOk;
}
public void setInitOk(boolean initOk) {
this.initOk = initOk;
}
public boolean isNextReady() {
return nextReady;
}
public void setNextReady(boolean nextReady) {
this.nextReady = nextReady;
}
public AtomicBoolean getThreadRunning() {
return threadRunning;
}
public Lock rLock() {
return lock.readLock();
}
public Lock wLock() {
return lock.writeLock();
}
public int getStep() {
return step;
}
public void setStep(int step) {
this.step = step;
}
public int getMinStep() {
return minStep;
}
public void setMinStep(int minStep) {
this.minStep = minStep;
}
public long getUpdateTimestamp() {
return updateTimestamp;
}
public void setUpdateTimestamp(long updateTimestamp) {
this.updateTimestamp = updateTimestamp;
}
@Override
public String toString() {
final StringBuilder sb = new StringBuilder("SegmentBuffer{");
sb.append("key='").append(key).append('\'');
sb.append(", segments=").append(Arrays.toString(segments));
sb.append(", currentPos=").append(currentPos);
sb.append(", nextReady=").append(nextReady);
sb.append(", initOk=").append(initOk);
sb.append(", threadRunning=").append(threadRunning);
sb.append(", step=").append(step);
sb.append(", minStep=").append(minStep);
sb.append(", updateTimestamp=").append(updateTimestamp);
sb.append('}');
return sb.toString();
}
}
package com.sankuai.inf.leaf.snowflake;
import com.google.common.base.Preconditions;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.Status;
import com.sankuai.inf.leaf.common.Utils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.Random;
public class SnowflakeIDGenImpl implements IDGen {
@Override
public boolean init() {
return true;
}
private static final Logger LOGGER = LoggerFactory.getLogger(SnowflakeIDGenImpl.class);
private final long twepoch;
private final long workerIdBits = 10L;
private final long maxWorkerId = ~(-1L << workerIdBits); // max assignable workerId = 1023
private final long sequenceBits = 12L;
private final long workerIdShift = sequenceBits;
private final long timestampLeftShift = sequenceBits + workerIdBits;
private final long sequenceMask = ~(-1L << sequenceBits);
private long workerId;
private long sequence = 0L;
private long lastTimestamp = -1L;
private static final Random RANDOM = new Random();
public SnowflakeIDGenImpl(String zkAddress, int port) {
// Thu Nov 04 2010 09:42:54 GMT+0800 (China Standard Time)
this(zkAddress, port, 1288834974657L);
}
/**
* @param zkAddress zk address
* @param port      snowflake listen port
* @param twepoch   starting epoch timestamp
*/
public SnowflakeIDGenImpl(String zkAddress, int port, long twepoch) {
this.twepoch = twepoch;
Preconditions.checkArgument(timeGen() > twepoch, "Snowflake does not support a twepoch later than the current time");
final String ip = Utils.getIp();
SnowflakeZookeeperHolder holder = new SnowflakeZookeeperHolder(ip, String.valueOf(port), zkAddress);
LOGGER.info("twepoch:{} ,ip:{} ,zkAddress:{} port:{}", twepoch, ip, zkAddress, port);
boolean initFlag = holder.init();
if (initFlag) {
workerId = holder.getWorkerID();
LOGGER.info("START SUCCESS USE ZK WORKERID-{}", workerId);
} else {
Preconditions.checkArgument(initFlag, "Snowflake Id Gen is not init ok");
}
Preconditions.checkArgument(workerId >= 0 && workerId <= maxWorkerId, "workerID must gte 0 and lte 1023");
}
@Override
public synchronized Result get(String key) {
long timestamp = timeGen();
if (timestamp < lastTimestamp) {
long offset = lastTimestamp - timestamp;
if (offset <= 5) {
try {
wait(offset << 1);
timestamp = timeGen();
if (timestamp < lastTimestamp) {
return new Result(-1, Status.EXCEPTION);
}
} catch (InterruptedException e) {
LOGGER.error("wait interrupted");
return new Result(-2, Status.EXCEPTION);
}
} else {
return new Result(-3, Status.EXCEPTION);
}
}
if (lastTimestamp == timestamp) {
sequence = (sequence + 1) & sequenceMask;
if (sequence == 0) {
// sequence rolled over within the same millisecond: move to the next millisecond and randomize the starting sequence
sequence = RANDOM.nextInt(100);
timestamp = tilNextMillis(lastTimestamp);
}
} else {
// a new millisecond has started: randomize the starting sequence
sequence = RANDOM.nextInt(100);
}
lastTimestamp = timestamp;
long id = ((timestamp - twepoch) << timestampLeftShift) | (workerId << workerIdShift) | sequence;
return new Result(id, Status.SUCCESS);
}
protected long tilNextMillis(long lastTimestamp) {
long timestamp = timeGen();
while (timestamp <= lastTimestamp) {
timestamp = timeGen();
}
return timestamp;
}
protected long timeGen() {
return System.currentTimeMillis();
}
public long getWorkerId() {
return workerId;
}
}
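// Illustration, not part of Leaf: given the bit layout above (12 sequence bits,
// 10 workerId bits, and the remaining high bits for the timestamp delta), an ID
// can be decomposed for debugging. A minimal sketch, assuming the default
// twepoch of 1288834974657L; the class name SnowflakeIdDecoder is hypothetical.
class SnowflakeIdDecoder {
    private static final long TWEPOCH = 1288834974657L; // default twepoch above

    static void decode(long id) {
        long sequence = id & ~(-1L << 12);          // low 12 bits
        long workerId = (id >> 12) & ~(-1L << 10);  // next 10 bits
        long timestamp = (id >> 22) + TWEPOCH;      // remaining bits plus twepoch
        System.out.println("timestamp=" + timestamp
                + ", workerId=" + workerId + ", sequence=" + sequence);
    }
}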
package com.sankuai.inf.leaf.snowflake;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.sankuai.inf.leaf.snowflake.exception.CheckLastTimeException;
import org.apache.commons.io.FileUtils;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.retry.RetryUntilElapsed;
import org.apache.zookeeper.data.Stat;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.collect.Maps;
import com.sankuai.inf.leaf.common.*;
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.zookeeper.CreateMode;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
public class SnowflakeZookeeperHolder {
private static final Logger LOGGER = LoggerFactory.getLogger(SnowflakeZookeeperHolder.class);
private String zk_AddressNode = null; // node storing this instance's key, ip:port-000000001
private String listenAddress = null; // this instance's key, ip:port
private int workerID;
private static final String PREFIX_ZK_PATH = "/snowflake/" + PropertyFactory.getProperties().getProperty("leaf.name");
private static final String PROP_PATH = System.getProperty("java.io.tmpdir") + File.separator + PropertyFactory.getProperties().getProperty("leaf.name") + "/leafconf/{port}/workerID.properties";
private static final String PATH_FOREVER = PREFIX_ZK_PATH + "/forever"; // persistent node that stores all data
private String ip;
private String port;
private String connectionString;
private long lastUpdateTime;
public SnowflakeZookeeperHolder(String ip, String port, String connectionString) {
this.ip = ip;
this.port = port;
this.listenAddress = ip + ":" + port;
this.connectionString = connectionString;
}
public boolean init() {
try {
CuratorFramework curator = createWithOptions(connectionString, new RetryUntilElapsed(1000, 4), 10000, 6000);
curator.start();
Stat stat = curator.checkExists().forPath(PATH_FOREVER);
if (stat == null) {
// the root node does not exist: first boot of this machine, create
// /snowflake/ip:port-000000000 and upload its data
zk_AddressNode = createNode(curator);
// worker id defaults to 0
updateLocalWorkerID(workerID);
// periodically report the local time to the forever node
scheduledUploadData(curator, zk_AddressNode);
return true;
} else {
Map<String, Integer> nodeMap = Maps.newHashMap(); // ip:port -> 00001
Map<String, String> realNode = Maps.newHashMap(); // ip:port -> (ip:port-000001)
// the root node exists: first check whether this machine already has its own node
List<String> keys = curator.getChildren().forPath(PATH_FOREVER);
for (String key : keys) {
String[] nodeKey = key.split("-");
realNode.put(nodeKey[0], key);
nodeMap.put(nodeKey[0], Integer.parseInt(nodeKey[1]));
}
Integer workerid = nodeMap.get(listenAddress);
if (workerid != null) {
// this machine has its own node; zk_AddressNode = ip:port
zk_AddressNode = PATH_FOREVER + "/" + realNode.get(listenAddress);
workerID = workerid; // reused when the worker starts
if (!checkInitTimeStamp(curator, zk_AddressNode)) {
throw new CheckLastTimeException("init timestamp check error,forever node timestamp gt this node time");
}
// prepare to create the ephemeral node
doService(curator);
updateLocalWorkerID(workerID);
LOGGER.info("[Old NODE]find forever node have this endpoint ip-{} port-{} workid-{} childnode and start SUCCESS", ip, port, workerID);
} else {
// a newly started node: create its persistent node, no timestamp check needed
String newNode = createNode(curator);
zk_AddressNode = newNode;
String[] nodeKey = newNode.split("-");
workerID = Integer.parseInt(nodeKey[1]);
doService(curator);
updateLocalWorkerID(workerID);
LOGGER.info("[New NODE]can not find node on forever node that endpoint ip-{} port-{} workid-{},create own node on forever node and start SUCCESS ", ip, port, workerID);
}
}
} catch (Exception e) {
LOGGER.error("Start node ERROR {}", e);
try {
Properties properties = new Properties();
properties.load(new FileInputStream(new File(PROP_PATH.replace("{port}", port + ""))));
workerID = Integer.valueOf(properties.getProperty("workerID"));
LOGGER.warn("START FAILED ,use local node file properties workerID-{}", workerID);
} catch (Exception e1) {
LOGGER.error("Read file error ", e1);
return false;
}
}
return true;
}
private void doService(CuratorFramework curator) {
scheduledUploadData(curator, zk_AddressNode); // /snowflake_forever/ip:port-000000001
}
private void scheduledUploadData(final CuratorFramework curator, final String zk_AddressNode) {
Executors.newSingleThreadScheduledExecutor(new ThreadFactory() {
@Override
public Thread newThread(Runnable r) {
Thread thread = new Thread(r, "schedule-upload-time");
thread.setDaemon(true);
return thread;
}
}).scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
updateNewData(curator, zk_AddressNode);
}
}, 1L, 3L, TimeUnit.SECONDS); // report data every 3 seconds
}
private boolean checkInitTimeStamp(CuratorFramework curator, String zk_AddressNode) throws Exception {
byte[] bytes = curator.getData().forPath(zk_AddressNode);
Endpoint endPoint = deBuildData(new String(bytes));
// this node's time must not be earlier than the last reported time
return !(endPoint.getTimestamp() > System.currentTimeMillis());
}
/**
* Create a persistent sequential node and put the endpoint data into its value
*
* @param curator
* @return
* @throws Exception
*/
private String createNode(CuratorFramework curator) throws Exception {
try {
return curator.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT_SEQUENTIAL).forPath(PATH_FOREVER + "/" + listenAddress + "-", buildData().getBytes());
} catch (Exception e) {
LOGGER.error("create node error msg {} ", e.getMessage());
throw e;
}
}
private void updateNewData(CuratorFramework curator, String path) {
try {
if (System.currentTimeMillis() < lastUpdateTime) {
return;
}
curator.setData().forPath(path, buildData().getBytes());
lastUpdateTime = System.currentTimeMillis();
} catch (Exception e) {
LOGGER.info("update init data error path is {} error is {}", path, e);
}
}
/**
* Build the data to upload
*
* @return
*/
private String buildData() throws JsonProcessingException {
Endpoint endpoint = new Endpoint(ip, port, System.currentTimeMillis());
ObjectMapper mapper = new ObjectMapper();
String json = mapper.writeValueAsString(endpoint);
return json;
}
private Endpoint deBuildData(String json) throws IOException {
ObjectMapper mapper = new ObjectMapper();
Endpoint endpoint = mapper.readValue(json, Endpoint.class);
return endpoint;
}
/**
* Cache a workerID value on the local file system so the machine can still start normally after a reboot even if zk is unavailable
*
* @param workerID
*/
private void updateLocalWorkerID(int workerID) {
File leafConfFile = new File(PROP_PATH.replace("{port}", port));
boolean exists = leafConfFile.exists();
LOGGER.info("file exists status is {}", exists);
if (exists) {
try {
FileUtils.writeStringToFile(leafConfFile, "workerID=" + workerID, false);
LOGGER.info("update file cache workerID is {}", workerID);
} catch (IOException e) {
LOGGER.error("update file cache error ", e);
}
} else {
// the file does not exist, so the parent directory cannot exist either
try {
boolean mkdirs = leafConfFile.getParentFile().mkdirs();
LOGGER.info("init local file cache create parent dis status is {}, worker id is {}", mkdirs, workerID);
if (mkdirs) {
if (leafConfFile.createNewFile()) {
FileUtils.writeStringToFile(leafConfFile, "workerID=" + workerID, false);
LOGGER.info("local file cache workerID is {}", workerID);
}
} else {
LOGGER.warn("create parent dir error===");
}
} catch (IOException e) {
LOGGER.warn("craete workerID conf file error", e);
}
}
}
private CuratorFramework createWithOptions(String connectionString, RetryPolicy retryPolicy, int connectionTimeoutMs, int sessionTimeoutMs) {
return CuratorFrameworkFactory.builder().connectString(connectionString)
.retryPolicy(retryPolicy)
.connectionTimeoutMs(connectionTimeoutMs)
.sessionTimeoutMs(sessionTimeoutMs)
.build();
}
/**
* Data structure reported to zk
*/
static class Endpoint {
private String ip;
private String port;
private long timestamp;
public Endpoint() {
}
public Endpoint(String ip, String port, long timestamp) {
this.ip = ip;
this.port = port;
this.timestamp = timestamp;
}
public String getIp() {
return ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public String getPort() {
return port;
}
public void setPort(String port) {
this.port = port;
}
public long getTimestamp() {
return timestamp;
}
public void setTimestamp(long timestamp) {
this.timestamp = timestamp;
}
}
public String getZk_AddressNode() {
return zk_AddressNode;
}
public void setZk_AddressNode(String zk_AddressNode) {
this.zk_AddressNode = zk_AddressNode;
}
public String getListenAddress() {
return listenAddress;
}
public void setListenAddress(String listenAddress) {
this.listenAddress = listenAddress;
}
public int getWorkerID() {
return workerID;
}
public void setWorkerID(int workerID) {
this.workerID = workerID;
}
}
package com.sankuai.inf.leaf.snowflake.exception;
public class CheckLastTimeException extends RuntimeException {
public CheckLastTimeException(String msg){
super(msg);
}
}
package com.sankuai.inf.leaf.snowflake.exception;
public class CheckOtherNodeException extends RuntimeException {
public CheckOtherNodeException(String message) {
super(message);
}
}
package com.sankuai.inf.leaf.snowflake.exception;
public class ClockGoBackException extends RuntimeException {
public ClockGoBackException(String message) {
super(message);
}
}
package com.sankuai.inf.leaf.segment;
import com.alibaba.druid.pool.DruidDataSource;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.PropertyFactory;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.segment.dao.IDAllocDao;
import com.sankuai.inf.leaf.segment.dao.impl.IDAllocDaoImpl;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import java.io.IOException;
import java.sql.SQLException;
import java.util.Properties;
public class IDGenServiceTest {
IDGen idGen;
DruidDataSource dataSource;
@Before
public void before() throws IOException, SQLException {
// Load Db Config
Properties properties = PropertyFactory.getProperties();
// Config dataSource
dataSource = new DruidDataSource();
dataSource.setUrl(properties.getProperty("jdbc.url"));
dataSource.setUsername(properties.getProperty("jdbc.username"));
dataSource.setPassword(properties.getProperty("jdbc.password"));
dataSource.init();
// Config Dao
IDAllocDao dao = new IDAllocDaoImpl(dataSource);
// Config ID Gen
idGen = new SegmentIDGenImpl();
((SegmentIDGenImpl) idGen).setDao(dao);
idGen.init();
}
@Test
public void testGetId() {
for (int i = 0; i < 100; ++i) {
Result r = idGen.get("leaf-segment-test");
System.out.println(r);
}
}
@After
public void after() {
dataSource.close();
}
}
package com.sankuai.inf.leaf.segment;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.Result;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations={"classpath:applicationContext.xml"}) // load the Spring config file
public class SpringIDGenServiceTest {
@Autowired
IDGen idGen;
@Test
public void testGetId() {
for (int i = 0; i < 100; i++) {
Result r = idGen.get("leaf-segment-test");
System.out.println(r);
}
}
}
package com.sankuai.inf.leaf.snowflake;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.PropertyFactory;
import com.sankuai.inf.leaf.common.Result;
import org.junit.Test;
import java.util.Properties;
public class SnowflakeIDGenImplTest {
@Test
public void testGetId() {
Properties properties = PropertyFactory.getProperties();
properties.setProperty("leaf.zk.list", "localhost:2181");
properties.setProperty("leaf.name", "leaf_test");
IDGen idGen = new SnowflakeIDGenImpl(properties.getProperty("leaf.zk.list"), 8080);
for (int i = 1; i < 1000; ++i) {
Result r = idGen.get("a");
System.out.println(r);
}
}
}
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:context="http://www.springframework.org/schema/context"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">
<context:property-placeholder location="classpath:leaf.properties"/>
<bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
init-method="init" destroy-method="close">
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<!-- maximum wait time when acquiring a connection -->
<property name="maxWait" value="10000"/>
<property name="timeBetweenEvictionRunsMillis" value="60000"/>
<property name="minEvictableIdleTimeMillis" value="300000"/>
<property name="testWhileIdle" value="true"/>
<property name="testOnBorrow" value="true"/>
<property name="testOnReturn" value="false"/>
<property name="validationQuery" value="select 1 "/>
</bean>
<bean id="idAllocDao" class="com.sankuai.inf.leaf.segment.dao.impl.IDAllocDaoImpl">
<constructor-arg ref="dataSource"/>
</bean>
<bean name="leaf" class="com.sankuai.inf.leaf.segment.SegmentIDGenImpl" init-method="init">
<property name="dao" ref="idAllocDao"/>
</bean>
</beans>
leaf.name=default
leaf.zk.list=localhost:2181
#jdbc.url=
#jdbc.username=
#jdbc.password=
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="info">
<appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
</appenders>
<loggers>
<root level="info">
<appender-ref ref="Console"/>
</root>
</loggers>
</configuration>
#!/usr/bin/env bash
# root directory of the program
PROJ_DIR=./target
# target jar of the program
PROJ_TARGET_JAR=${PROJ_DIR}/leaf.jar
JAVA_CMD=java
JVM_ARGS="-jar"
CMD="${JAVA_CMD} ${JVM_ARGS} ${PROJ_TARGET_JAR}"
echo "Leaf Start--------------"
echo "JVM ARGS ${JVM_ARGS}"
echo "->"
echo "PROJ_TARGET_JAR ${PROJ_TARGET_JAR}"
echo "------------------------"
$CMD
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-parent</artifactId>
<version>1.0.1</version>
</parent>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-server</artifactId>
<version>1.0.1</version>
<packaging>jar</packaging>
<name>leaf-server</name>
<description>Leaf HTTP Server</description>
<properties>
<spring-boot-dependencies.version>2.0.6.RELEASE</spring-boot-dependencies.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>${spring-boot-dependencies.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- Spring Boot FreeMarker dependency -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-freemarker</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-core</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
</dependency>
<!--zk-->
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<exclusions>
<exclusion>
<artifactId>log4j</artifactId>
<groupId>log4j</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${spring-boot-dependencies.version}</version>
<configuration>
<mainClass>com.sankuai.inf.leaf.server.LeafServerApplication</mainClass>
</configuration>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-dependency-plugin</artifactId>
<executions>
<execution>
<id>copy-dependencies</id>
<phase>compile</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/lib</outputDirectory>
<excludeTransitive>false</excludeTransitive>
<!-- whether to strip version numbers from the copied jar file names -->
<stripVersion>false</stripVersion>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
package com.sankuai.inf.leaf.server;
public class Constants {
public static final String LEAF_SEGMENT_ENABLE = "leaf.segment.enable";
public static final String LEAF_JDBC_URL = "leaf.jdbc.url";
public static final String LEAF_JDBC_USERNAME = "leaf.jdbc.username";
public static final String LEAF_JDBC_PASSWORD = "leaf.jdbc.password";
public static final String LEAF_SNOWFLAKE_ENABLE = "leaf.snowflake.enable";
public static final String LEAF_SNOWFLAKE_PORT = "leaf.snowflake.port";
public static final String LEAF_SNOWFLAKE_ZK_ADDRESS = "leaf.snowflake.zk.address";
}
package com.sankuai.inf.leaf.server;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
@SpringBootApplication
public class LeafServerApplication {
public static void main(String[] args) {
SpringApplication.run(LeafServerApplication.class, args);
}
}
package com.sankuai.inf.leaf.server.controller;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.Status;
import com.sankuai.inf.leaf.server.exception.LeafServerException;
import com.sankuai.inf.leaf.server.exception.NoKeyException;
import com.sankuai.inf.leaf.server.service.SegmentService;
import com.sankuai.inf.leaf.server.service.SnowflakeService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class LeafController {
private Logger logger = LoggerFactory.getLogger(LeafController.class);
@Autowired
private SegmentService segmentService;
@Autowired
private SnowflakeService snowflakeService;
@RequestMapping(value = "/api/segment/get/{key}")
public String getSegmentId(@PathVariable("key") String key) {
return get(key, segmentService.getId(key));
}
@RequestMapping(value = "/api/snowflake/get/{key}")
public String getSnowflakeId(@PathVariable("key") String key) {
return get(key, snowflakeService.getId(key));
}
private String get(String key, Result result) {
if (key == null || key.isEmpty()) {
throw new NoKeyException();
}
if (result.getStatus().equals(Status.EXCEPTION)) {
throw new LeafServerException(result.toString());
}
return String.valueOf(result.getId());
}
}
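For a quick smoke test of the two endpoints above, assuming the server is running locally on port 9090 (per application.properties) and a leaf-segment-test row exists in leaf_alloc (the snowflake key can be any non-empty string):
curl http://localhost:9090/api/segment/get/leaf-segment-test
curl http://localhost:9090/api/snowflake/get/leaf-snowflake-test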
package com.sankuai.inf.leaf.server.controller;
import com.sankuai.inf.leaf.segment.SegmentIDGenImpl;
import com.sankuai.inf.leaf.server.model.SegmentBufferView;
import com.sankuai.inf.leaf.segment.model.LeafAlloc;
import com.sankuai.inf.leaf.segment.model.SegmentBuffer;
import com.sankuai.inf.leaf.server.service.SegmentService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@Controller
public class LeafMonitorController {
private Logger logger = LoggerFactory.getLogger(LeafMonitorController.class);
@Autowired
private SegmentService segmentService;
@RequestMapping(value = "cache")
public String getCache(Model model) {
Map<String, SegmentBufferView> data = new HashMap<>();
SegmentIDGenImpl segmentIDGen = segmentService.getIdGen();
if (segmentIDGen == null) {
throw new IllegalArgumentException("You should set leaf.segment.enable=true first");
}
Map<String, SegmentBuffer> cache = segmentIDGen.getCache();
for (Map.Entry<String, SegmentBuffer> entry : cache.entrySet()) {
SegmentBufferView sv = new SegmentBufferView();
SegmentBuffer buffer = entry.getValue();
sv.setInitOk(buffer.isInitOk());
sv.setKey(buffer.getKey());
sv.setPos(buffer.getCurrentPos());
sv.setNextReady(buffer.isNextReady());
sv.setMax0(buffer.getSegments()[0].getMax());
sv.setValue0(buffer.getSegments()[0].getValue().get());
sv.setStep0(buffer.getSegments()[0].getStep());
sv.setMax1(buffer.getSegments()[1].getMax());
sv.setValue1(buffer.getSegments()[1].getValue().get());
sv.setStep1(buffer.getSegments()[1].getStep());
data.put(entry.getKey(), sv);
}
logger.info("Cache info {}", data);
model.addAttribute("data", data);
return "segment";
}
@RequestMapping(value = "db")
public String getDb(Model model) {
SegmentIDGenImpl segmentIDGen = segmentService.getIdGen();
if (segmentIDGen == null) {
throw new IllegalArgumentException("You should set leaf.segment.enable=true first");
}
List<LeafAlloc> items = segmentIDGen.getAllLeafAllocs();
logger.info("DB info {}", items);
model.addAttribute("items", items);
return "db";
}
/**
* the output is like this:
* {
* "timestamp": "1567733700834(2019-09-06 09:35:00.834)",
* "sequenceId": "3448",
* "workerId": "39"
* }
*/
@RequestMapping(value = "decodeSnowflakeId")
@ResponseBody
public Map<String, String> decodeSnowflakeId(@RequestParam("snowflakeId") String snowflakeIdStr) {
Map<String, String> map = new HashMap<>();
try {
long snowflakeId = Long.parseLong(snowflakeIdStr);
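// Bit layout implied by the shifts below (standard Leaf/Twitter snowflake):
// 1 sign bit | 41-bit timestamp (epoch 1288834974657L, 2010-11-04) | 10-bit workerId | 12-bit sequence.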
long originTimestamp = (snowflakeId >> 22) + 1288834974657L;
Date date = new Date(originTimestamp);
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
map.put("timestamp", String.valueOf(originTimestamp) + "(" + sdf.format(date) + ")");
long workerId = (snowflakeId >> 12) ^ (snowflakeId >> 22 << 10);
map.put("workerId", String.valueOf(workerId));
long sequence = snowflakeId ^ (snowflakeId >> 12 << 12);
map.put("sequenceId", String.valueOf(sequence));
} catch (NumberFormatException e) {
map.put("errorMsg", "snowflake Id反解析发生异常!");
}
return map;
}
}
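The monitor endpoints can be exercised the same way; cache and db require leaf.segment.enable=true, and the snowflake ID below is a made-up example value:
curl http://localhost:9090/cache
curl http://localhost:9090/db
curl "http://localhost:9090/decodeSnowflakeId?snowflakeId=1234567890123456789"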
package com.sankuai.inf.leaf.server.exception;
public class InitException extends Exception{
public InitException(String msg) {
super(msg);
}
}
package com.sankuai.inf.leaf.server.exception;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
@ResponseStatus(code=HttpStatus.INTERNAL_SERVER_ERROR)
public class LeafServerException extends RuntimeException {
public LeafServerException(String msg) {
super(msg);
}
}
package com.sankuai.inf.leaf.server.exception;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;
@ResponseStatus(code=HttpStatus.INTERNAL_SERVER_ERROR,reason="Key is missing")
public class NoKeyException extends RuntimeException {
}
package com.sankuai.inf.leaf.server.model;
public class SegmentBufferView {
private String key;
private long value0;
private int step0;
private long max0;
private long value1;
private int step1;
private long max1;
private int pos;
private boolean nextReady;
private boolean initOk;
public String getKey() {
return key;
}
public void setKey(String key) {
this.key = key;
}
public long getValue1() {
return value1;
}
public void setValue1(long value1) {
this.value1 = value1;
}
public int getStep1() {
return step1;
}
public void setStep1(int step1) {
this.step1 = step1;
}
public long getMax1() {
return max1;
}
public void setMax1(long max1) {
this.max1 = max1;
}
public long getValue0() {
return value0;
}
public void setValue0(long value0) {
this.value0 = value0;
}
public int getStep0() {
return step0;
}
public void setStep0(int step0) {
this.step0 = step0;
}
public long getMax0() {
return max0;
}
public void setMax0(long max0) {
this.max0 = max0;
}
public int getPos() {
return pos;
}
public void setPos(int pos) {
this.pos = pos;
}
public boolean isNextReady() {
return nextReady;
}
public void setNextReady(boolean nextReady) {
this.nextReady = nextReady;
}
public boolean isInitOk() {
return initOk;
}
public void setInitOk(boolean initOk) {
this.initOk = initOk;
}
}
package com.sankuai.inf.leaf.server.service;
import com.alibaba.druid.pool.DruidDataSource;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.PropertyFactory;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.ZeroIDGen;
import com.sankuai.inf.leaf.segment.SegmentIDGenImpl;
import com.sankuai.inf.leaf.segment.dao.IDAllocDao;
import com.sankuai.inf.leaf.segment.dao.impl.IDAllocDaoImpl;
import com.sankuai.inf.leaf.server.Constants;
import com.sankuai.inf.leaf.server.exception.InitException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import java.sql.SQLException;
import java.util.Properties;
@Service("SegmentService")
public class SegmentService {
private Logger logger = LoggerFactory.getLogger(SegmentService.class);
private IDGen idGen;
private DruidDataSource dataSource;
public SegmentService() throws SQLException, InitException {
Properties properties = PropertyFactory.getProperties();
boolean flag = Boolean.parseBoolean(properties.getProperty(Constants.LEAF_SEGMENT_ENABLE, "true"));
if (flag) {
// Config dataSource
dataSource = new DruidDataSource();
dataSource.setUrl(properties.getProperty(Constants.LEAF_JDBC_URL));
dataSource.setUsername(properties.getProperty(Constants.LEAF_JDBC_USERNAME));
dataSource.setPassword(properties.getProperty(Constants.LEAF_JDBC_PASSWORD));
dataSource.init();
// Config Dao
IDAllocDao dao = new IDAllocDaoImpl(dataSource);
// Config ID Gen
idGen = new SegmentIDGenImpl();
((SegmentIDGenImpl) idGen).setDao(dao);
if (idGen.init()) {
logger.info("Segment Service Init Successfully");
} else {
throw new InitException("Segment Service Init Fail");
}
} else {
idGen = new ZeroIDGen();
logger.info("Zero ID Gen Service Init Successfully");
}
}
public Result getId(String key) {
return idGen.get(key);
}
public SegmentIDGenImpl getIdGen() {
if (idGen instanceof SegmentIDGenImpl) {
return (SegmentIDGenImpl) idGen;
}
return null;
}
}
package com.sankuai.inf.leaf.server.service;
import com.sankuai.inf.leaf.IDGen;
import com.sankuai.inf.leaf.common.PropertyFactory;
import com.sankuai.inf.leaf.common.Result;
import com.sankuai.inf.leaf.common.ZeroIDGen;
import com.sankuai.inf.leaf.server.Constants;
import com.sankuai.inf.leaf.server.exception.InitException;
import com.sankuai.inf.leaf.snowflake.SnowflakeIDGenImpl;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import java.util.Properties;
@Service("SnowflakeService")
public class SnowflakeService {
private Logger logger = LoggerFactory.getLogger(SnowflakeService.class);
private IDGen idGen;
public SnowflakeService() throws InitException {
Properties properties = PropertyFactory.getProperties();
boolean flag = Boolean.parseBoolean(properties.getProperty(Constants.LEAF_SNOWFLAKE_ENABLE, "true"));
if (flag) {
String zkAddress = properties.getProperty(Constants.LEAF_SNOWFLAKE_ZK_ADDRESS);
int port = Integer.parseInt(properties.getProperty(Constants.LEAF_SNOWFLAKE_PORT));
idGen = new SnowflakeIDGenImpl(zkAddress, port);
if(idGen.init()) {
logger.info("Snowflake Service Init Successfully");
} else {
throw new InitException("Snowflake Service Init Fail");
}
} else {
idGen = new ZeroIDGen();
logger.info("Zero ID Gen Service Init Successfully");
}
}
public Result getId(String key) {
return idGen.get(key);
}
}
server.port=9090
spring.application.name=leaf-server
spring.freemarker.cache=false
spring.freemarker.charset=UTF-8
spring.freemarker.check-template-location=true
spring.freemarker.content-type=text/html
spring.freemarker.expose-request-attributes=true
spring.freemarker.expose-session-attributes=true
spring.freemarker.request-context-attribute=request
leaf.name=com.sankuai.leaf.opensource.test
leaf.segment.enable=false
#leaf.jdbc.url=
#leaf.jdbc.username=
#leaf.jdbc.password=
leaf.snowflake.enable=false
#leaf.snowflake.zk.address=
#leaf.snowflake.port=
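Both modes ship disabled; a hypothetical configuration enabling them (every connection value below is a placeholder, not a real endpoint — the property keys themselves are the ones defined in Constants):
leaf.segment.enable=true
leaf.jdbc.url=jdbc:mysql://127.0.0.1:3306/leaf?characterEncoding=utf8
leaf.jdbc.username=leaf
leaf.jdbc.password=leaf
leaf.snowflake.enable=true
leaf.snowflake.zk.address=127.0.0.1:2181
leaf.snowflake.port=8685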
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Leaf</title>
<link href="/css/bootstrap.min.css" rel="stylesheet">
</head>
<body>
<table class="table table-hover">
<thead>
<tr>
<th>tag</th>
<th>max</th>
<th>step</th>
<th>update</th>
</tr>
</thead>
<tbody>
<#if items?exists>
<#list items as item>
<tr>
<td>${item.key}</td>
<td>${item.maxId}</td>
<td>${item.step}</td>
<td>${item.updateTime}</td>
</tr>
</#list>
</#if>
</tbody>
</table>
</body>
</html>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Leaf</title>
<link href="/css/bootstrap.min.css" rel="stylesheet">
</head>
<body>
<table class="table table-hover">
<thead>
<tr>
<th>name</th>
<th>init</th>
<th>next</th>
<th>pos</th>
<th>value0</th>
<th>max0</th>
<th>step0</th>
<th>value1</th>
<th>max1</th>
<th>step1</th>
</tr>
</thead>
<tbody>
<#if data?exists>
<#list data?keys as key>
<tr>
<td>${key}</td>
<td>${data[key].initOk?string('true','false')}</td>
<td>${data[key].nextReady?string('true','false')}</td>
<td>${data[key].pos}</td>
<td>${data[key].value0}</td>
<td>${data[key].max0}</td>
<td>${data[key].step0}</td>
<td>${data[key].value1}</td>
<td>${data[key].max1}</td>
<td>${data[key].step1}</td>
</tr>
</#list>
</#if>
</tbody>
</table>
</body>
</html>
package com.sankuai.inf.leaf.server;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
@RunWith(SpringRunner.class)
@SpringBootTest
public class LeafServerApplicationTests {
@Test
public void contextLoads() {
}
}
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-parent</artifactId>
<packaging>pom</packaging>
<version>1.0.1</version>
<name>Leaf</name>
<modules>
<module>leaf-core</module>
<module>leaf-server</module>
</modules>
<description>Distributed ID Generate Service</description>
<developers>
<developer>
<id>zhangzhitong</id>
<name>zhangzhitong</name>
<email>zhitongzhang@outlook.com</email>
</developer>
<developer>
<id>zhanghan</id>
<name>zhanghan</name>
<email>han122655904@163.com</email>
</developer>
<developer>
<id>xiezhaodong</id>
<name>xiezhaodong</name>
<email>pursuer_xie@foxmail.com</email>
</developer>
</developers>
<organization>
<name>Meituan-Dianping Group</name>
<url>https://github.com/Meituan-Dianping</url>
</organization>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<spring.version>5.0.13.RELEASE</spring.version>
<junit.version>4.12</junit.version>
<maven.compiler.version>3.5.1</maven.compiler.version>
<perf4j.version>0.9.16</perf4j.version>
<curator.version>4.0.1</curator.version>
<slf4j.version>1.7.25</slf4j.version>
<druid.version>1.1.12</druid.version>
<jackson-databind.version>2.9.10</jackson-databind.version>
<mysql-connector-java.version>5.1.47</mysql-connector-java.version>
<commons-io.version>2.6</commons-io.version>
<log4j.version>2.7</log4j.version>
<mybatis.version>3.5.3</mybatis.version>
<mybatis-spring.version>2.0.3</mybatis-spring.version>
</properties>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>com.sankuai.inf.leaf</groupId>
<artifactId>leaf-core</artifactId>
<version>1.0.1</version>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid</artifactId>
<version>${druid.version}</version>
</dependency>
<!--zk-->
<dependency>
<groupId>org.apache.curator</groupId>
<artifactId>curator-recipes</artifactId>
<version>${curator.version}</version>
<exclusions>
<exclusion>
<artifactId>log4j</artifactId>
<groupId>log4j</groupId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>${commons-io.version}</version>
</dependency>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis</artifactId>
<version>${mybatis.version}</version>
</dependency>
<dependency>
<groupId>org.perf4j</groupId>
<artifactId>perf4j</artifactId>
<version>${perf4j.version}</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>${slf4j.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-core</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-beans</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-jdbc</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-context</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-test</artifactId>
<version>${spring.version}</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
<version>${log4j.version}</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-api</artifactId>
<version>${log4j.version}</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>${log4j.version}</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>${mysql-connector-java.version}</version>
</dependency>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis-spring</artifactId>
<version>${mybatis-spring.version}</version>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<finalName>leaf</finalName>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven.compiler.version}</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
<plugin>
<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.5.201505241946</version>
<executions>
<execution>
<id>pre-unit-test</id>
<goals>
<goal>prepare-agent</goal>
</goals>
<configuration>
<destFile>
${project.build.directory}/${project.artifactId}-jacoco.exec
</destFile>
</configuration>
</execution>
<execution>
<id>post-unit-test</id>
<phase>test</phase>
<goals>
<goal>report</goal>
</goals>
<configuration>
<dataFile>
${project.build.directory}/${project.artifactId}-jacoco.exec
</dataFile>
<outputDirectory>${project.build.directory}/jacoco</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<distributionManagement>
<repository>
<id>nexus-releases</id>
<name>Nexus Release Repository</name>
<url>http://39.100.254.140:12010/repository/maven-releases/</url>
</repository>
<snapshotRepository>
<id>nexus-snapshots</id>
<name>Nexus Snapshot Repository</name>
<url>http://39.100.254.140:12010/repository/maven-snapshots/</url>
</snapshotRepository>
</distributionManagement>
<repositories>
<repository>
<id>nexus-loit-dev</id>
<name>Nexus Repository</name>
<url>http://39.100.254.140:12010/repository/maven-public/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>nexus-loit-dev</id>
<name>Nexus Plugin Repository</name>
<url>http://39.100.254.140:12010/repository/maven-public/</url>
<snapshots>
<enabled>true</enabled>
</snapshots>
<releases>
<enabled>true</enabled>
</releases>
</pluginRepository>
</pluginRepositories>
</project>
DROP TABLE IF EXISTS `leaf_alloc`;
CREATE TABLE `leaf_alloc` (
`biz_tag` varchar(128) NOT NULL DEFAULT '',
`max_id` bigint(20) NOT NULL DEFAULT '1',
`step` int(11) NOT NULL,
`description` varchar(256) DEFAULT NULL,
`update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`biz_tag`)
) ENGINE=InnoDB;
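An illustrative seed row for the biz tag used by the tests above (the step of 2000 is an arbitrary choice):
INSERT INTO leaf_alloc(biz_tag, max_id, step, description) VALUES ('leaf-segment-test', 1, 2000, 'Test leaf Segment Mode Get Id');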