🔄 卡若AI sync 2026-02-25 09:24 | Updated: 水桥 platform integration, master index & entry points, operations-hub reference material, operations-hub workbench | Excluded >20MB: 13 files
@@ -1,7 +1,7 @@
 ---
 name: 飞书管理
-description: 飞书日志/文档自动写入与知识库管理
-triggers: 飞书日志、写入飞书、飞书知识库、飞书运营报表、派对效果数据、104场写入、运营报表填写、派对截图填表发群、Excel写飞书、批量写飞书表格、表格日报
+description: 飞书日志/文档自动写入与知识库管理(含统一文章上传)
+triggers: 飞书日志、写入飞书、飞书知识库、飞书运营报表、派对效果数据、104场写入、运营报表填写、派对截图填表发群、Excel写飞书、批量写飞书表格、表格日报、飞书文章上传、MD转飞书JSON、同标题更新飞书
 owner: 水桥
 group: 水
 version: "1.1"
@@ -279,36 +279,6 @@ python3 scripts/wanzhi_feishu_project_sync.py
 
----
-
-## Unified article upload (mandatory single entry point)
-
-From now on, anything that "writes an article and uploads it to Feishu" goes through this one path; the old scripts are no longer used piecemeal.
-
-### One-shot command (recommended)
-
-```bash
-python3 /Users/karuo/Documents/个人/卡若AI/02_卡人(水)/水桥_平台对接/飞书管理/脚本/feishu_article_unified_publish.py \
-  --parent MyvRwCVNSiTg5ok6e3fc6uA5nHg \
-  --title "文章标题" \
-  --md "/绝对路径/文章.md" \
-  --json "/绝对路径/文章_feishu_blocks.json" \
-  --webhook "https://open.feishu.cn/open-apis/bot/v2/hook/xxx"
-```
-
-### Unified rules (baked into the script)
-
-1. **Convert to JSON locally first**: `md_to_feishu_json.py` automatically strips separator lines and empty blocks, reducing style clutter and Feishu API errors.
-2. **Same-title updates preferred**: when a same/similar title is found under the parent node, it is updated rather than created again.
-3. **Image upload supported**: reads `image_paths` and uploads the images as document assets.
-4. **Image-block fallback**: if the Feishu API returns `invalid param` for an image block, the body text is still written and the images remain in the document assets (they can be inserted manually).
-
-### Verified behaviour so far (production)
-
-- On the current tenant, the Feishu docx API may return `1770001 invalid param` for image blocks of `block_type 12/18/27`.
-- The stable approach is therefore "**body text written reliably + images kept as assets + manual insertion when needed**".
-- If Feishu later enables this capability, switch back to fully automatic image embedding.
-
 ---
 
 ## Wiki sub-document creation (diary sharing / new research)
 
 Creates a sub-document under a given Feishu Wiki node, for diary sharing, new research, and similar write-ups.
@@ -346,9 +316,6 @@ JSON 格式:与 `团队入职流程与新人登记表_feishu_blocks.json` 相
 ├── feishu_video_clip_README.md
 ├── wanzhi_feishu_project_sync.py          # WanZhi Esports → Feishu project sync
 ├── feishu_wiki_create_doc.py              # Wiki sub-doc creation (diary/research)
-├── md_to_feishu_json.py                   # Markdown -> Feishu blocks JSON (cleaned formatting)
-├── feishu_publish_blocks_with_images.py   # Same-title update + image upload + publish
-├── feishu_article_unified_publish.py      # Unified entry: article publishing (recommended sole entry)
 └── .feishu_tokens.json                    # Token storage
 ```
 
@@ -384,5 +351,5 @@ python3 /Users/karuo/Documents/个人/卡若AI/02_卡人(水)/飞书管理/s
 
 ---
 
-**Version**: v3.3 | **Updated**: 2026-02-20
-**Features**: silent authorization, reverse-order insertion, TNTWF conventions, four-quadrant classification, **auto-open the Feishu log page after writing**, **operations-report sub-skill (screenshot → fill table → post to group in vertical format, meeting-minutes image upload, monthly stats)**
+**Version**: v3.4 | **Updated**: 2026-02-25
+**Features**: silent authorization, reverse-order insertion, TNTWF conventions, four-quadrant classification, **auto-open the Feishu log page after writing**, **operations-report sub-skill (screenshot → fill table → post to group in vertical format, meeting-minutes image upload, monthly stats)**, **unified article upload (MD -> JSON -> Feishu, same-title update preferred, image upload supported)**
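Both the skill doc's publishing flow and the scripts in this repo write docx blocks in batches of 50, the per-request cap of Feishu's "create children" endpoint. A minimal standalone sketch of that batching, using the same text-block shape; the helper names here are illustrative, not the repo's:

```python
def text_block(content: str) -> dict:
    # block_type 2 = plain text block in the Feishu docx block model
    return {
        "block_type": 2,
        "text": {
            "elements": [{"text_run": {"content": content, "text_element_style": {}}}],
            "style": {},
        },
    }


def batched_children(lines: list[str], batch_size: int = 50) -> list[list[dict]]:
    # One text block per line, chunked into request-sized batches
    blocks = [text_block(line) for line in lines]
    return [blocks[i : i + batch_size] for i in range(0, len(blocks), batch_size)]
```

Each batch would then be posted as the `children` payload of one API call.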
@@ -1,6 +1,6 @@
 {
-  "access_token": "u-57b3lfJ8Z7XH0uHVO_Tnx2l5moqBk1qXXEaaFAM00wS6",
-  "refresh_token": "ur-579x5jnChbCqtQoGJxgcPpl5moMBk1Uph8aaEBw00xzj",
+  "access_token": "u-4Tr54dmqV8lE_qtfG76A2Il5mMMBk1irW8aaVBM00wO2",
+  "refresh_token": "ur-4iCTU0PcheAVQUi_Z43c9El5koO5k1MpV8aaIQw00wCn",
   "name": "飞书用户",
-  "auth_time": "2026-02-25T05:58:38.159179"
+  "auth_time": "2026-02-25T09:19:23.848992"
 }
235 02_卡人(水)/水桥_平台对接/飞书管理/脚本/feishu_publish_md_direct.py (Normal file)
@@ -0,0 +1,235 @@
#!/usr/bin/env python3
"""
Direct raw-Markdown upload to Feishu (no Feishu-JSON conversion).

Features:
1) Reads the .md source and writes it line by line as-is (Markdown markers preserved)
2) Prefers updating an existing doc with the same/similar title over creating a duplicate
3) No image upload or replacement (raw text only)
"""

import argparse
import re
import sys
from pathlib import Path

import requests

# Reuse the token helpers from the sibling script
SCRIPT_DIR = Path(__file__).resolve().parent
sys.path.insert(0, str(SCRIPT_DIR))
import feishu_wiki_create_doc as fwd  # noqa: E402


def _text_block(content: str) -> dict:
    # block_type 2 = plain text block in the Feishu docx block model
    return {
        "block_type": 2,
        "text": {
            "elements": [{"text_run": {"content": content, "text_element_style": {}}}],
            "style": {},
        },
    }


def _normalize_title(t: str) -> str:
    # Lowercase, drop a trailing parenthetical, then strip spaces and separators
    s = (t or "").strip().lower()
    s = re.sub(r"[((][^))]*[))]\s*$", "", s)
    s = re.sub(r"[\s\-—_·::]+", "", s)
    return s


def _is_similar_title(a: str, b: str) -> bool:
    # Titles match when their normalized forms are equal, or one contains the
    # other (containment only counts for titles of at least 6 characters)
    na, nb = _normalize_title(a), _normalize_title(b)
    if not na or not nb:
        return False
    if na == nb:
        return True
    if len(na) >= 6 and na in nb:
        return True
    if len(nb) >= 6 and nb in na:
        return True
    return False


def parse_title(md_text: str, fallback: str) -> str:
    # Use the first "# " heading as the title; fall back to the file stem
    for line in md_text.splitlines():
        if line.startswith("# "):
            return line[2:].strip()
    return fallback


def find_existing(parent_token: str, title: str, headers: dict) -> tuple[str | None, str | None, str | None]:
    """Scan the parent wiki node's children for a similar title.

    Returns (doc_token, node_token, matched_title), or (None, None, None) when no match.
    """
    r = requests.get(
        f"https://open.feishu.cn/open-apis/wiki/v2/spaces/get_node?token={parent_token}",
        headers=headers,
        timeout=30,
    )
    j = r.json()
    if j.get("code") != 0:
        return None, None, None
    node = j["data"]["node"]
    space_id = node.get("space_id") or (node.get("space") or {}).get("space_id") or node.get("origin_space_id")
    if not space_id:
        return None, None, None

    # Page through child nodes, 50 per request
    page_token = None
    while True:
        params = {"parent_node_token": parent_token, "page_size": 50}
        if page_token:
            params["page_token"] = page_token
        nr = requests.get(
            f"https://open.feishu.cn/open-apis/wiki/v2/spaces/{space_id}/nodes",
            headers=headers,
            params=params,
            timeout=30,
        )
        nj = nr.json()
        if nj.get("code") != 0:
            return None, None, None
        data = nj.get("data", {}) or {}
        nodes = data.get("nodes", []) or data.get("items", []) or []
        for n in nodes:
            node_title = n.get("title", "") or n.get("node", {}).get("title", "")
            if _is_similar_title(node_title, title):
                obj = n.get("obj_token")
                node_token = n.get("node_token")
                return (obj or node_token), node_token, node_title
        page_token = data.get("page_token")
        if not page_token:
            break
    return None, None, None


def resolve_doc_token(node_token: str, headers: dict) -> str:
    # A wiki node wraps a docx document; return the underlying obj_token
    r = requests.get(
        f"https://open.feishu.cn/open-apis/wiki/v2/spaces/get_node?token={node_token}",
        headers=headers,
        timeout=30,
    )
    j = r.json()
    if j.get("code") != 0:
        raise RuntimeError(f"get_node 失败: {j.get('msg')}")
    return j["data"]["node"].get("obj_token") or node_token


def clear_doc_blocks(doc_token: str, headers: dict) -> bool:
    # On some tenants this endpoint fails with "field validation failed";
    # return False in that case so the caller appends instead of replacing.
    all_items = []
    page_token = None
    while True:
        params = {"page_size": 100}
        if page_token:
            params["page_token"] = page_token
        r = requests.get(
            f"https://open.feishu.cn/open-apis/docx/v1/documents/{doc_token}/blocks",
            headers=headers,
            params=params,
            timeout=30,
        )
        j = r.json()
        if j.get("code") != 0:
            return False
        data = j.get("data", {}) or {}
        all_items.extend(data.get("items", []) or [])
        page_token = data.get("page_token")
        if not page_token:
            break
    # Only delete direct children of the document root, 50 per batch
    child_ids = [b["block_id"] for b in all_items if b.get("parent_id") == doc_token and b.get("block_id")]
    if not child_ids:
        return True
    for i in range(0, len(child_ids), 50):
        batch = child_ids[i : i + 50]
        rd = requests.delete(
            f"https://open.feishu.cn/open-apis/docx/v1/documents/{doc_token}/blocks/{doc_token}/children/batch_delete",
            headers=headers,
            json={"block_id_list": batch},
            timeout=30,
        )
        if rd.json().get("code") != 0:
            return False
    return True


def create_node(parent_token: str, title: str, headers: dict) -> tuple[str, str]:
    # Create a new docx node under the parent; returns (doc_token, node_token)
    r = requests.get(
        f"https://open.feishu.cn/open-apis/wiki/v2/spaces/get_node?token={parent_token}",
        headers=headers,
        timeout=30,
    )
    j = r.json()
    if j.get("code") != 0:
        raise RuntimeError(f"get_node 失败: {j.get('msg')}")
    node = j["data"]["node"]
    space_id = node.get("space_id") or (node.get("space") or {}).get("space_id") or node.get("origin_space_id")
    if not space_id:
        raise RuntimeError("无法获取 space_id")
    cr = requests.post(
        f"https://open.feishu.cn/open-apis/wiki/v2/spaces/{space_id}/nodes",
        headers=headers,
        json={"parent_node_token": parent_token, "obj_type": "docx", "node_type": "origin", "title": title},
        timeout=30,
    )
    cj = cr.json()
    if cj.get("code") != 0:
        raise RuntimeError(f"创建节点失败: {cj.get('msg')}")
    node = cj["data"]["node"]
    return (node.get("obj_token") or node.get("node_token")), node.get("node_token")


def write_raw_md_lines(doc_token: str, headers: dict, md_text: str) -> None:
    # One text block per Markdown line, appended 50 blocks per request
    blocks = [_text_block(line) for line in md_text.splitlines()]
    for i in range(0, len(blocks), 50):
        batch = blocks[i : i + 50]
        r = requests.post(
            f"https://open.feishu.cn/open-apis/docx/v1/documents/{doc_token}/blocks/{doc_token}/children",
            headers=headers,
            json={"children": batch},
            timeout=30,
        )
        j = r.json()
        if j.get("code") != 0:
            raise RuntimeError(f"写入失败: {j.get('msg')}")


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--parent", default="MyvRwCVNSiTg5ok6e3fc6uA5nHg", help="Wiki 父节点 token")
    ap.add_argument("--md", required=True, help="Markdown 文件路径")
    ap.add_argument("--title", default="", help="可选,覆盖 MD 第一行标题")
    args = ap.parse_args()

    md_path = Path(args.md).expanduser().resolve()
    if not md_path.exists():
        raise SystemExit(f"❌ MD 不存在: {md_path}")
    md_text = md_path.read_text(encoding="utf-8")
    title = args.title.strip() or parse_title(md_text, md_path.stem)

    token = fwd.get_token(args.parent)
    if not token:
        raise SystemExit("❌ Token 无效,请先授权")
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

    print("=" * 50)
    print("📤 MD 原文直传飞书(不转 JSON)")
    print(f"父节点: {args.parent}")
    print(f"标题: {title}")
    print(f"文件: {md_path}")
    print("=" * 50)

    doc_token, node_token, hit_title = find_existing(args.parent, title, headers)
    if doc_token and node_token:
        print(f"📋 命中相似标题,更新: {hit_title}")
        if clear_doc_blocks(doc_token, headers):
            print("✅ 已清空原内容")
        else:
            print("⚠️ 清空失败,改为追加")
    else:
        doc_token, node_token = create_node(args.parent, title, headers)
        print(f"✅ 新建文档: {node_token}")

    write_raw_md_lines(doc_token, headers, md_text)
    print(f"✅ 上传完成: https://cunkebao.feishu.cn/wiki/{node_token}")


if __name__ == "__main__":
    main()
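The update-vs-create decision in the script above hinges entirely on its title-matching helpers. The standalone copy below mirrors the logic of `_normalize_title` / `_is_similar_title` so it can be exercised on its own, showing what counts as the "same" title:

```python
import re


def normalize_title(t: str) -> str:
    # Lowercase, drop a trailing parenthetical, strip spaces/separators
    s = (t or "").strip().lower()
    s = re.sub(r"[((][^))]*[))]\s*$", "", s)
    s = re.sub(r"[\s\-—_·::]+", "", s)
    return s


def is_similar_title(a: str, b: str) -> bool:
    # Equal after normalization, or one contains the other (len >= 6 only)
    na, nb = normalize_title(a), normalize_title(b)
    if not na or not nb:
        return False
    return na == nb or (len(na) >= 6 and na in nb) or (len(nb) >= 6 and nb in na)
```

The 6-character floor keeps very short titles from matching by substring accident.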