
Commit eab168d

refactor

1 parent a8393cb


51 files changed: +3874, -60 lines

content/images/kafka-design-1.jpg (87.8 KB)

content/images/kafka-design-2.png (29.9 KB)

content/images/kafka-design-3.jpg (70.2 KB)

content/posts/arraylist-toarray.md

Lines changed: 185 additions & 0 deletions
@@ -0,0 +1,185 @@
---
title: "On ArrayList.toArray() and Arrays.asList().toArray()"
date: 2017-02-18
draft: false
categories: ["Java"]
tags: ["Java", "String"]
---

## Introduction

Recently I called an interface in a project whose return type was `Map<String, Object>`, and the value put into the map was of type `List<Dog>`. When I retrieved the value and cast it, a `ClassCastException` was thrown. The scenario looks like this:

```java
public static void test2() {
    List<Dog> list = new ArrayList<>();
    list.add(new Dog());
    System.out.println(list.toArray().getClass()); // the array type here is already [Ljava.lang.Object;

    Map<String, Object> dataMap = Maps.newHashMap();
    dataMap.put("x", list.toArray());
    Dog[] d = (Dog[]) dataMap.get("x"); // so a ClassCastException is thrown here
}
```

So before using the toArray methods we should really understand what they return.

## Understanding ArrayList.toArray()

From the source we can see that it returns an `Object[]`, losing the actual element type even though the underlying storage holds objects of a concrete type. This is exactly what the Javadoc means when it says the method "acts as bridge between array-based and collection-based APIs".

```java
public Object[] toArray() {
    return Arrays.copyOf(elementData, size);
}
```

However, if we use Arrays.asList, the problem above does not occur.

```java
public static void test1() {
    List<Dog> list = Arrays.asList(new Dog(), new BigDog());

    System.out.println(list.toArray().getClass()); // prints the Dog[] array class: the element type is preserved

    Map<String, Object> dataMap = Maps.newHashMap();
    dataMap.put("x", list.toArray());
    Dog[] d = (Dog[]) dataMap.get("x"); // no exception here
}
```

For `java.util.ArrayList` we can use the `toArray(T[] a)` method to specify the runtime type of the returned array.

```java
public <T> T[] toArray(T[] a) {
    if (a.length < size)
        // Make a new array of a's runtime type, but my contents:
        return (T[]) Arrays.copyOf(elementData, size, a.getClass());
    System.arraycopy(elementData, 0, a, 0, size);
    if (a.length > size)
        a[size] = null;
    return a;
}
```

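Applying this to the scenario from the introduction, here is a minimal, self-contained sketch of the fix. It uses a plain `HashMap` instead of Guava's `Maps.newHashMap()`, and the `Dog` class and the `"x"` key are just the illustrative names used above:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToArrayFix {
    static class Dog {}

    public static void main(String[] args) {
        List<Dog> list = new ArrayList<>();
        list.add(new Dog());

        Map<String, Object> dataMap = new HashMap<>();
        // Pass a typed array so toArray(T[]) creates an array of Dog's runtime type.
        dataMap.put("x", list.toArray(new Dog[0]));

        Dog[] d = (Dog[]) dataMap.get("x"); // no ClassCastException this time
        System.out.println(d.getClass());   // prints the Dog[] array class
    }
}
```
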
## Understanding Arrays.asList().toArray()

The utility method Arrays.asList() is used frequently in practice to wrap the given objects in a fixed-size list backed by an array. Note that the ArrayList it returns is a private inner class of Arrays, not java.util.ArrayList.

```java
public static <T> List<T> asList(T... a) {
    return new ArrayList<>(a);
}

private static class ArrayList<E> extends AbstractList<E>
    implements RandomAccess, java.io.Serializable
{
    private static final long serialVersionUID = -2764017481108945198L;
    // the backing storage keeps the original array type
    private final E[] a;

    ArrayList(E[] array) {
        a = Objects.requireNonNull(array);
    }

    @Override
    public int size() {
        return a.length;
    }

    @Override
    public Object[] toArray() {
        return a.clone();
    }

    @Override
    @SuppressWarnings("unchecked")
    public <T> T[] toArray(T[] a) {
        int size = size();
        if (a.length < size)
            return Arrays.copyOf(this.a, size,
                                 (Class<? extends T[]>) a.getClass());
        System.arraycopy(this.a, 0, a, 0, size);
        if (a.length > size)
            a[size] = null;
        return a;
    }

    @Override
    public E get(int index) {
        return a[index];
    }

    @Override
    public E set(int index, E element) {
        E oldValue = a[index];
        a[index] = element;
        return oldValue;
    }

    @Override
    public int indexOf(Object o) {
        E[] a = this.a;
        if (o == null) {
            for (int i = 0; i < a.length; i++)
                if (a[i] == null)
                    return i;
        } else {
            for (int i = 0; i < a.length; i++)
                if (o.equals(a[i]))
                    return i;
        }
        return -1;
    }

    @Override
    public boolean contains(Object o) {
        return indexOf(o) != -1;
    }

    @Override
    public Spliterator<E> spliterator() {
        return Spliterators.spliterator(a, Spliterator.ORDERED);
    }

    @Override
    public void forEach(Consumer<? super E> action) {
        Objects.requireNonNull(action);
        for (E e : a) {
            action.accept(e);
        }
    }

    @Override
    public void replaceAll(UnaryOperator<E> operator) {
        Objects.requireNonNull(operator);
        E[] a = this.a;
        for (int i = 0; i < a.length; i++) {
            a[i] = operator.apply(a[i]);
        }
    }

    @Override
    public void sort(Comparator<? super E> c) {
        Arrays.sort(a, c);
    }
}
```

Here toArray() still declares a return type of Object[], but unlike java.util.ArrayList the backing storage is a generically typed array, `private final E[] a`, so the actual component type is preserved, as shown below:

```java
public static void test5() {
    Object[] objs = new Dog[1];
    System.out.println(objs.getClass()); // class [Lcom.vonzhou.learn.other.ClassDemo$Dog

    Object[] objs2 = new Object[1];
    objs2[0] = new Dog();
    System.out.println(objs2.getClass()); // class [Ljava.lang.Object;
}
```

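To see the two behaviors side by side, here is a minimal sketch; the class name is illustrative, and the typed result for `Arrays.asList` reflects the JDK 8 implementation shown above:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ToArrayCompare {
    static class Dog {}

    public static void main(String[] args) {
        List<Dog> plain = new ArrayList<>();
        plain.add(new Dog());
        // java.util.ArrayList copies its internal Object[] storage, so the element type is lost:
        System.out.println(plain.toArray().getClass());   // class [Ljava.lang.Object;

        List<Dog> wrapped = Arrays.asList(new Dog(), new Dog());
        // Arrays.asList's inner ArrayList clones its typed backing array (a Dog[] here),
        // so the component type survives:
        System.out.println(wrapped.toArray().getClass()); // the Dog[] array class
    }
}
```
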
182+
183+
## 总结
184+
185+
类型系统很复杂,这里只是看到了表象。

content/posts/kafka-design.md

Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@
---
title: "The Design of Kafka"
date: 2016-11-27
draft: false
categories: ["Kafka"]
tags: ["Kafka"]
---

![Philosophy](/images/kafka-design-1.jpg)

These are some notes taken while reading the Kafka documentation.

## Overview

What is a message queue? It grew out of inter-process communication; the IPC model in Unix is shown below:

![IPC model](/images/kafka-design-2.png)

What characterizes a message queue?

* IPC
* Decoupling and asynchronous processing
* Publish/subscribe pattern

What changes in a distributed environment?

* Messaging middleware
* Fault tolerance and scalability
* ActiveMQ, Kafka, RabbitMQ, ZeroMQ, RocketMQ

## Kafka's Design

* distributed, real-time processing
* partitioning
* producer / consumer groups
* pagecache-centric

## Persistence

* Disks are not as slow as commonly assumed, especially for sequential writes (OS optimizations: read-ahead and batched writes).

| Sequential write | Random write |
|---|---|
| ~600 MB/sec | ~100 KB/sec |

* The storage structure is a simple persistent queue (append-only log) rather than a BTree index (see the sketch below).

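As a rough illustration of the append-only queue idea (this is not Kafka's storage code; the file name and record format are made up for the sketch), sequential appends through a `FileChannel` might look like this:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class AppendOnlyLog {
    private final FileChannel channel;

    public AppendOnlyLog(Path file) throws IOException {
        // Open in append mode: every write goes to the end of the file, i.e. sequential I/O.
        this.channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    /** Appends one record and returns its byte offset in the log. */
    public long append(String record) throws IOException {
        long offset = channel.size(); // the new record starts at the current end of the file
        channel.write(ByteBuffer.wrap((record + "\n").getBytes(StandardCharsets.UTF_8)));
        return offset;
    }

    public void close() throws IOException {
        channel.force(true); // flush data (and metadata) to disk
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        AppendOnlyLog log = new AppendOnlyLog(Paths.get("demo.log"));
        System.out.println("offset = " + log.append("hello"));
        System.out.println("offset = " + log.append("world"));
        log.close();
    }
}
```
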
## Efficiency

Lots of small I/O operations? Batch them: larger network packets, larger sequential disk operations, and contiguous memory blocks amortize the cost of each round trip.

Lots of byte copying? Use zero-copy techniques such as the sendfile system call on Linux (see the sketch below).

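On the JVM, sendfile is exposed through `FileChannel.transferTo`; a minimal sketch of pushing a file to a socket without copying the bytes through user space (the host, port, and file name are placeholders) could look like this:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        try (FileChannel file = FileChannel.open(Paths.get("segment.log"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {

            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // transferTo may send fewer bytes than requested, so loop until everything is sent.
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}
```
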
## Broker

* Storage
* Replication
* Log cleanup

## Producer

* Load balancing (random or hash-based partitioning)
* Asynchronous send (trading latency for throughput; see the sketch below)

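A minimal sketch of an asynchronous, batching producer with the Kafka Java client; the broker address, topic name, and batching values are placeholders, not tuned recommendations:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Batching knobs: wait up to 10 ms to fill batches of up to 64 KB.
        props.put("linger.ms", "10");
        props.put("batch.size", "65536");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // send() is asynchronous: it appends the record to an in-memory buffer and returns;
                // the callback fires once the broker has acknowledged the batch.
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i),
                        (metadata, exception) -> {
                            if (exception != null) {
                                exception.printStackTrace();
                            }
                        });
            }
        } // close() flushes any records still sitting in the buffer
    }
}
```
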
## Consumer

* Pull model (the consumer fetches at its own pace; see the sketch below)
* Consumer position (offsets are tracked on the consumer side)

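And a minimal sketch of the pull model with the Java client: the consumer polls at its own pace and commits its own position (the broker address, group id, and topic are placeholders; `poll(Duration)` assumes a reasonably recent client version):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PullConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("enable.auto.commit", "false"); // we commit the position ourselves
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            for (int i = 0; i < 10; i++) {
                // Pull: the consumer asks the broker for data at its own pace.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
                consumer.commitSync(); // record the consumer's position
            }
        }
    }
}
```
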
![Component interaction](/images/kafka-design-3.jpg)
