<article class="prose">
<p>Hi, I'm <strong>Chojan Shang</strong> (aka <strong>PsiACE</strong>).
I focus on engineering data and AI systems with clear contracts, observable operations, and auditable workflows.
Not one‑off demos, but maintainable and measurable product capabilities.</p>
<h2 id="experience-responsibilities">Experience &amp; Responsibilities</h2>
<p><strong>GenAI Engineer | Vesoft Inc. (NebulaGraph)</strong><br />
2024.09 – present · Shanghai</p>
<ul>
<li>Deliver GraphRAG and generative AI in graph‑database scenarios across retrieval, Q&amp;A, and recommendation.</li>
<li>Lead the architecture and engineering of the NebulaGraph GenAI Platform: ingest → retrieval (vector/graph) → inference.</li>
<li>Design and implement a Fusion GraphRAG framework that fuses structural graph knowledge with semantic retrieval to optimize context building for complex relational QA.</li>
<li>Advance LLM engineering workflows and codify reusable components and best practices.</li>
</ul>
<p><strong>PMC Member | Apache OpenDAL</strong><br />
2022 – present · Remote</p>
<ul>
<li>Contribute to and help lead the unified data access layer’s architecture and releases, focusing on caching, retries, observability, and multi‑cloud backend abstraction.</li>
<li>Review critical PRs and design topics, driving API consistency and platform stability; optimize data paths for high‑concurrency and high‑latency scenarios.</li>
<li>Build community norms and engineering standards, improve docs and examples, mentor contributors, and coordinate OSPP participation to improve contributor retention and release cadence.</li>
</ul>
<p><strong>Core Engineer | Databend Labs (cloud‑native data warehouse)</strong><br />
2021.07 – 2024.08 · Remote</p>
<ul>
<li>Founding team member; built baseline query execution and storage access layers; unified I/O around Apache Arrow/Parquet.</li>
<li>Led integrations and case studies with HuggingFace, lakeFS, and KubeSphere to broaden data‑lake/cloud‑native scenarios.</li>
<li>Drove developer advocacy and community operations; the project grew from ~800 to 8000+ stars in three years and became a representative OSS data warehouse.</li>
</ul>
<h2 id="focus-areas">Focus Areas</h2>
<ul>